
· 4 min read

The purpose of this article is to explain the architecture and technology stack used to build the ecosystem we delivered to a Fortune 500 public retail company. Due to NDA agreements, I'm obligated to obscure the actual purposes and specific use cases of those applications.

Background

Our company contracted with this Fortune 500 public retail company (let's refer to it as X from now on) to build a data collection application of some sort (beyond the scope of this article). That application was built in a monolithic architecture: one application, one project that connects all the application resources, serves the web pages, and handles all the HTTP and AJAX requests. The application would also interface with the databases, caching services, ActiveDirectory, and anything else the application might utilize. At the time, that was a reasonable choice; it was a single application, and we weren't envisioning the development of a whole ecosystem. And honestly, it was my very first project in Node.js (or anything, for that matter), and I don't think I even knew what a microservice was. 🤔 😗

The first database design consisted of about 16 tables (which I also can't share), but it's worth mentioning that I designed that database while I was taking Database I, and it was my first ever database design. I even remember going to class and coming back to make more changes. The current database was redesigned from the ground up to accommodate the new applications. It was also better normalized and optimized with multiple indexes to improve query performance. It now has 50 tables and 4 views.


Fast Forward ...

Apparently, company X liked what we did, and they came back to us with 4 new projects plus v2.0 of the collection application, with more functionality, business rules, and administrative capabilities. We then had to rethink the architectural choices we had made before, including the database design (I had actually finished Database I and Database II by then).


System Architecture

The current architecture consists of 11 microservices connected through a unifying RESTful API. All the applications and mobile apps communicate with the system resources, and with each other, through that API. Depending on their purposes and functionality, the microservices interact with one or more of the following pieces: the database server (Microsoft SQL Server), the caching service / session manager (Redis), ActiveDirectory (user and domain management), the queueing service (RabbitMQ), the machine learning and data processing service (R servers), and the reporting service (CrystalReports).

System Architecture Diagram

Server architecture

In this project, we used 6 servers: 3 development servers hosted and managed by our office, and 3 staging and production servers managed by the client, to which we have access for debugging and deployment on their staging environment.

Server Architecture and setup

I was responsible for managing the development servers and configuring the Continuous Integration (CI) builds, and the lead developer was responsible for deploying our code to their staging servers.

Technology Stack

  • Node.js / Restify
  • Microsoft SQL Server
  • Bookshelf.js / Knex
    • The microservices use an Object-Relational Mapping (ORM) module for data access to abstract away vendor-specific SQL syntax, which makes the application more portable and easier to switch to another SQL flavor (MSSQL, MySQL, PostgreSQL, etc.).
      • We used to use Sequelize, but we switched over to Bookshelf/Knex as it gives us more control over the queries, and at the time it seemed much easier.
  • Redis
  • RabbitMQ
  • DeployR / R Server
  • ActiveDirectory
  • Drone.io

Development Style

The development was intended to, and did, start in a Test-Driven Development (TDD) style. But I confess, we didn't follow through, leading to tremendous technical debt. It's a problem we started slowly reducing by writing more unit tests and by utilizing applications such as Paw, which allows us to create collections of HTTP requests with their parameters and run them all at once, making it easy to spot any failures and quickly go fix them.

We also started a process to review and accept any features and bug fixes: the project manager is the only person who's allowed to close any issue or task item. This might not be the best solution, but it's working for the time being until we pay off our debt.


· 20 min read

Abstract

Machine Learning (ML) is a subfield of Artificial Intelligence (AI) that involves designing systems that are able to recognize patterns and train themselves without being explicitly programmed. Machine learning has been involved in many fields and domains. This paper focuses on the algorithms and techniques that are being used in the context of improving sales in public retail stores. I will detail the main corpus of each algorithm's logic and its type, so as to introduce new researchers to the field.

Introduction

Machine learning has been involved in many fields and domains. Machine Learning (ML) is a subfield of Artificial Intelligence (AI) in which systems are designed to recognize the patterns in data and distill their experience into a learning model that can be applied to new sets of data, providing new insights and results that wouldn't be easily retrievable otherwise.

Tom M. Mitchell, one of the pioneers and main contributors to the field of machine learning and the chair of Machine Learning at Carnegie Mellon University, defines machine learning as a system that "learns from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E" (Mitchell, 1997).

Machine learning, however, is only as good as its training data, and multiple algorithms were found to produce completely different results depending on the training datasets fed to the system. An understanding of the mathematical foundations of the algorithms is therefore important for implementing them as accurately and efficiently as possible.

Machine Learning Algorithm Types by Learning Style

While people have long wished for computers to learn as well as they do, only certain types of learning tasks have been achieved toward that goal. These learning types are chiefly classified into two main classes of learning experience: supervised and unsupervised learning. New types have also emerged from combining the two into what has become known as semi-supervised learning.

Supervised Learning

In supervised learning, the system is provided with datasets of inputs and outputs of known attributes and properties, and it learns from the training datasets to come up with a mapping function. The system then becomes able to determine the correct output when fed new datasets with unknown outputs. Classification and regression algorithms are examples of supervised learning. Classification algorithms are used to categorize inputs into known sets of discrete categories or classes. Regression, however, is used when the outputs are expected to be continuous values rather than discrete groups.
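To make the distinction concrete, here is a minimal Python sketch (the data and helper names are invented for illustration): a nearest-neighbor classifier returns a discrete class, while a least-squares fit returns a continuous value.

```python
# Classification vs. regression on toy 1-D data.

def classify(x, labeled_points):
    """Return the label of the nearest training point (1-NN classification)."""
    return min(labeled_points, key=lambda p: abs(p[0] - x))[1]

def fit_line(points):
    """Least-squares fit y = a*x + b for one-variable regression."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Classification: inputs map to known discrete classes.
training = [(1.0, "cheap"), (2.0, "cheap"), (8.0, "premium"), (9.0, "premium")]
print(classify(8.5, training))            # → premium

# Regression: inputs map to a continuous output (here y = 2x exactly).
a, b = fit_line([(1, 2), (2, 4), (3, 6)])
print(round(a, 2), round(b, 2))           # → 2.0 0.0
```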

Unsupervised Learning

Unsupervised learning is performed when no certain type of output is expected. The system is not provided with the correct response; instead, it tries to unearth the hidden patterns of the provided training sets (Marsland, 2015). Unsupervised learning is mostly utilized in data mining applications, and some even believe that it is more a matter of data mining than of machine learning (Bell, 2014). Clustering algorithms are an example of this type. The next sections discuss various machine learning algorithms, providing a brief description, advantages, and limitations, as well as some current and potential applications of each algorithm in the context of public retail businesses.

Machine Learning Algorithms

Neural Networks

Artificial Neural Networks (ANN), or neural networks for short, were first modeled on the architecture of the human brain's neurons: neurons are interconnected through axons in a way that data and inputs propagate through them while being processed. Analogously, nodes are connected through sets of adaptive weights (the axons) that are adjusted as the algorithm learns and adapts to the input data. Figure 1 (Glosser.ca, 28 February 2013)

The neural network algorithm has various learning paradigms and styles that allow it to be classified as either supervised or unsupervised. Backpropagation is one of the learning paradigms widely used in many artificial intelligence applications. The learning process is done in two stages: feed-forward propagation, in which inputs are fed to the randomly weighted neurons, and backward propagation, in which the error is calculated as the difference between the anticipated and the actual outputs. The algorithm then iteratively applies a gradient descent optimization function to reduce that deviation between actual and expected results, updating the weights accordingly (Marsland, 2015).
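The two stages can be sketched for a deliberately tiny case: a single linear neuron with one weight, learning the target function y = 2x. Real networks add layers and non-linear activations; the data and learning rate here are made up for illustration.

```python
import random

# One linear neuron trained by the two-stage loop described above:
# feed forward to get a prediction, then push the error back as a
# gradient descent update on the weight.
random.seed(0)
w = random.random()                        # randomly weighted neuron
data = [(x, 2 * x) for x in range(1, 6)]   # target function: y = 2x
lr = 0.01                                  # learning rate

for _ in range(200):
    for x, target in data:
        y = w * x                          # feed-forward propagation
        error = y - target                 # deviation from expected output
        w -= lr * error * x                # gradient step on squared error

print(round(w, 3))                         # converges toward 2.0
```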

Neural networks have evolved extensively into many forms, such as the Perceptron, the Hopfield Network, and the Radial Basis Function Network (RBFN). NN algorithms have been employed in a vast array of applications, such as regression analysis and function approximation, due to their capability of approximating non-linear functions; they are also used in classification and pattern recognition problems. Many research projects have been conducted in the field of sales forecasting using neural networks combined with clustering and extreme learning machine algorithms and techniques (Lu & Kao, 2016). However, as discussed, since NN implements gradient descent as its optimization function, it is possible for it to get trapped in local minima that are not necessarily the global minimum error, causing the function not to converge to the desired result (Mitchell, 1997).

Clustering

One of the crucial tasks of data mining and machine learning is to classify large sets of data and group them into clusters and categories that can stimulate further insights from the data. Clustering is an unsupervised learning problem in which the algorithm is only provided with inputs and is trained to analyze the properties of the provided elements, find the similarities between them, and cluster them accordingly. It differs from classification algorithms in that it doesn't know in advance what the possible classes are (Bose & Mahapatra, 2001). Clustering helps in the discovery of new, unknown, but useful classes and categories (Maglogiannis, 2007). Figure 2 (hellisp, 2010)

Clustering techniques can be classified into two main categories based on the clustering criteria: connectivity and compactness. K-means is one of the compactness-clustering algorithms. "The aim of the K-means algorithm is to divide M points in N dimensions into K clusters so that the within-cluster sum of squares is minimized" (Hartigan & Wong, 1979). K-means starts by initializing K centroids, from which the distances to the points are calculated. The algorithm then runs iteratively to reduce those distances, recalculating and updating the positions of the centroids until the points no longer fluctuate between them, at which point the clustering process can be assumed complete (Bell, 2014). K-means, however, has some limitations and deficiencies: its clustering model relies on reducing the distances to the centroids, not on closeness to neighboring points (Wu, 2012), and it tends to create clusters of similar sizes. Together, these factors may result in misclassifying points, since K-means doesn't always perform as expected on datasets with clusters of different densities and sizes. Figure 3 (Dung, 2015): although there are 2 visual clusters, K-means misclassifies them (the green dots are the cluster centroids).
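The iterative loop described above can be sketched in a few lines of Python; the points and parameters below are invented for illustration.

```python
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain K-means on 2-D points: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialize K centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assignment step
            i = min(range(k),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2 +
                                  (p[1] - centroids[j][1]) ** 2)
            clusters[i].append(p)
        for i, c in enumerate(clusters):       # update step
            if c:                              # keep old centroid if empty
                centroids[i] = (sum(x for x, _ in c) / len(c),
                                sum(y for _, y in c) / len(c))
    return centroids

# Two obvious blobs; the centroids land at their means.
pts = [(0, 0), (0, 1), (1, 0), (1, 1), (9, 9), (9, 10), (10, 9), (10, 10)]
print(sorted(kmeans(pts, 2)))                  # → [(0.5, 0.5), (9.5, 9.5)]
```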

Clustering algorithms are often used, in the context of public retail sales improvement, to discover customers' buying patterns or to reclassify branches and stores into new classes and clusters to which different marketing strategies can be applied.

Support Vector Machines

Support Vector Machine (SVM) is an algorithm that, given labeled datasets (a supervised learning algorithm), can be used for classification and regression analysis (Bell, 2014; Marsland, 2015). It differs from the previously discussed K-means algorithm in the overall task definition, the learning style, and the nature of the algorithm itself. While clustering algorithms tend to group elements around the centroid of each class, SVM tries to find the decision boundary between the classes, where the items of each class are, theoretically, separated entirely by a margined decision line. The algorithm thus defines what the attributes of the classes are, rather than classifying elements based on what the common element looks like. Figure 4, Figure 5

The algorithm was first introduced by Vladimir Vapnik as a novel solution to the two-group classification problem (Cortes & Vapnik, 1995). The main aim of the algorithm is to find the optimum classifying boundary (hyperplane) that has the maximum distance from the instances of either class (Kirk, 2014). The computation reduces to a quadratic optimization problem that can be solved efficiently. The mathematical foundation of the algorithm is beyond the scope of this introductory paper; however, you can refer to the excellent tutorial by Burges (Burges, 1998). Figure 6: the H3 line is the optimum separating hyperplane.

The algorithm also overcomes the problem of inseparable training datasets and non-linear datasets that can't be separated by a single surface by applying what are called kernel functions, or the kernel trick (Bell, 2014; Kirk, 2014; Marsland, 2015). Kernel functions transform the datasets from a low-dimensional space into a higher-dimensional space; after transforming into a sufficiently appropriate number of dimensions, the same technique can be applied. "A linear separation in feature space corresponds to a non-linear segregation in the original input space" (Kotsiantis, Zaharakis, & Pintelas, 2006). The choice of kernel function and the number of output dimensions has a substantial effect on the performance of the classifier (Smola, Schölkopf, & Müller, 1998). Support Vector Machine algorithms can be used in the business optimization context, as proposed by Yang et al. (Yang, Deb, & Fong, 2011). They are also utilized in measuring retail companies' performance and predicting financial distress situations (Fan, 2000; Hu & Ansell, 2007).
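The kernel idea can be shown in miniature with an explicit feature map rather than a true kernel function (the data below is made up): two ring-shaped classes that no straight line separates in 2-D become separable by a plane after mapping phi(x, y) = (x, y, x^2 + y^2).

```python
# Inner ring vs. outer ring: not linearly separable in the plane.
inner = [(0.5, 0.0), (-0.5, 0.0), (0.0, 0.5), (0.0, -0.5)]   # class -1
outer = [(2.0, 0.0), (-2.0, 0.0), (0.0, 2.0), (0.0, -2.0)]   # class +1

def phi(x, y):
    """Explicit lift into 3-D: the third coordinate is the squared radius."""
    return (x, y, x * x + y * y)

# In the lifted space, the plane z = 1 separates the classes perfectly.
for x, y in inner:
    assert phi(x, y)[2] < 1.0
for x, y in outer:
    assert phi(x, y)[2] > 1.0
print("linearly separable in feature space")
```

A real SVM never computes the lifted coordinates explicitly; the kernel function evaluates the inner products in feature space directly, which is what makes very high-dimensional mappings affordable.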

Decision Trees

Decision trees are supervised machine learning algorithms that chiefly aim at producing a model that predicts the classification of an instance by sorting it through a flowchart-like classification model learned from the training datasets (Mitchell, 1997; Rokach & Maimon, 2014). The output model can be presented as a consecutive set of if-statements. Each node of the tree represents an attribute by which the data can be further separated into branches, each of which is a possible value of the tested attribute, and the data keeps branching until it reaches a leaf (Mitchell, 1997). There are two types of decision trees: classification trees, in which the final leaves of the tree are discrete elements or classes, and regression trees, in which the output is a real value, such as a house price (Loh, 2011). In this paper, we'll focus on classification trees and, more specifically, the Iterative Dichotomiser 3 (ID3) algorithm. Figure 7

The algorithm starts off trying to find the best attribute to test at the root node of the decision tree. To do that, we have to define two statistical concepts: entropy and information gain. Entropy is a measure of the impurity and unpredictability of the information, and it ranges from zero to one, zero meaning that there are no impurities and that all the training data belongs to one class. Information gain, in turn, measures how well a property segregates the training datasets toward the target classifications; it can be presented as the expected reduction in entropy that results from separating the data according to a certain property or attribute. The property with the highest information gain is placed at the root node of the decision tree (Mitchell, 1997). The process is then repeated for each set of data passed through the branches until the leaf nodes are reached. Decision trees are subject to the overfitting problem, where the algorithm doesn't capture the underlying relationship but rather adapts too perfectly to the training datasets, in a manner that makes it inapplicable in the real world (Marsland, 2015). This problem causes the algorithm to lose accuracy and reliability (Hssina, Merbouha, Ezzikouri, & Erritali, 2014; Mitchell, 1997). Many techniques have been developed to address this issue, such as the method known as reduced error pruning. Pruning a decision node means detaching the subtree rooted at that node, making it a leaf node whose value is the most common classification of the items associated with it; the pruned branches are removed only if the algorithm validates that it performs no worse without them (Mitchell, 1997). Decision trees can be used to guide the decision-making process, as they help describe the problems under study in a systematic and structured fashion.
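Entropy and information gain, as ID3 uses them to pick the root attribute, can be computed directly; the tiny dataset below is invented for illustration.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels, in bits (0 = pure, 1 = 50/50)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr, label="label"):
    """Expected reduction in entropy from splitting `rows` on `attr`."""
    base = entropy([r[label] for r in rows])
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[label] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

# A made-up dataset: does a shopper buy, given weather and weekend?
data = [
    {"weather": "sunny", "weekend": "yes", "label": "buy"},
    {"weather": "sunny", "weekend": "no",  "label": "buy"},
    {"weather": "rainy", "weekend": "yes", "label": "skip"},
    {"weather": "rainy", "weekend": "no",  "label": "skip"},
]

# `weather` predicts the label perfectly, so it has the highest gain
# and would sit at the root of the ID3 tree.
print(information_gain(data, "weather"))   # → 1.0
print(information_gain(data, "weekend"))   # → 0.0
```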

Machine Learning Scenarios and Applications in the Retailing Business

As computing power exponentially increases, machine learning gains more momentum as it becomes more practicable and implementable. Current computational power is capable of reliably storing and analyzing tremendous amounts of data (Big Data), allowing more complex machine learning models to be implemented. However, much of the research in machine learning is done for the sake of machine learning: many researchers tend to concentrate their work on further perfecting the performance of existing algorithms (Wagstaff, 2012), and not much of that research communicates back to the originating problems and domains. The following section presents some of the current and potential uses of machine learning in the retailing business, surveying recent works and publications in an attempt to fill the gap that is capping the impact machine learning can offer.

Machine learning can be used in business to optimize sales, improve marketing strategies, forecast sales and revenue, and predict and analyze the risks that businesses may endure. According to research conducted by the Accenture Institute for High Performance on enterprises with $500M or more in sales, 76% of the companies are investing in machine learning research and are targeting higher sales growth with it (Wilson, Mulani, & Alter, 2016). Machine learning gives companies and enterprises the opportunity to finally put the data they have collected throughout the years to use (Columbus, 2016): to transform that data into useful information and insights that drive the future of their businesses.

Recommendations

Almost all the online shopping services that retailing companies provide use machine learning and recommender system algorithms (Cho, Kim, & Kim, 2002; Kim & Ahn, 2008; Senecal & Nantel, 2004). They extend the domain of big data analytics and allow for an exceptional shopping experience. As the algorithms learn more about the users, they become able to match buyers and sellers based on the customer's needs and product availability. Recommender systems can also be used to simplify heavy-lifting processes and operations for businesses: to ensure that stores are adequately stocked based on the predicted shopping trends of the surrounding population, and to recommend and optimize product prices and shelf placement. Walter, Battiston, Yildirim, and Schweitzer (2012) propose a recommender system that can be put into action in retail stores. The proposal involves smart carts with chip readers that identify the items in the cart and authenticate shoppers through their loyalty cards; the carts would recommend items to the shopper and predict items they may have missed. Recommender system algorithms can also be used at the store level to recommend products based on the demographics of the customers and the location of the store (Giering, 2008). The North Face brand has already tapped into the power of recommender systems and utilized IBM's Watson API to combine them with Natural Language Processing (NLP) to provide a personal shopping assistant to their online customers (O'Rourke, 2016). A shopper can simply say or type, "I'm going on a skiing trip next week in Colorado," and the system will fuse its catalog of products with weather forecasting data, the buyer's preferences, and their shopping history to provide a personalized and unique shopping experience (Gaudin, 2016).

Price Optimization

Much research has been done on price-based revenue management (Özer & Phillips, 2012; Talluri & Van Ryzin, 2006). Employing a dynamic pricing strategy was found to achieve higher revenue (Surowiecki, 2014). Dynamic pricing can take into account the demand for the product, competitor pricing, and other factors, and machine learning is well suited to accommodate those needs. Many retailers and businesses, such as 7-Eleven and O'Reilly Auto Parts (Shish, 2015), are leveraging machine learning algorithms to implement those techniques. Much research has also been conducted to develop and implement pricing decision support tools for retailers (Ferreira, Lee, & Simchi-Levi, 2015). Caro and Gallien (2012) address the challenge of optimizing and recommending price changes in the fashion retail business, where items are designed to have a short product life cycle. The proposed system replaced the manual, informal process that was in place at the fashion company Zara and increased its revenue by 6%. This work is the first large-scale, multi-product price optimization machine learning model for which all the technical and implementation details, along with the impact results, are available to the public. An extensive literature review on the topic can be found in (Özer & Phillips, 2012; Talluri & Van Ryzin, 2006; Van Ryzin & Talluri, 2005).

Sales and Marketing Campaign Management

Fraud Prevention

Conclusion

Machine learning is no longer the preserve of researchers; it has been involved in many applications and innovations across different technological and commercial domains, and the massive bulk of big data has boosted the potential of what it can achieve. Machine learning algorithms differ from ordinary algorithms in that they are not very predictable, making them harder to debug and improve. Many strategies and techniques, however, have been developed to tune the algorithms and identify the areas of improvement for the models to operate accurately.

Data analytics, which occupies a huge share of business decision-making, opens the door wide for machine learning to be perceived as the analytic tool that perfectly fits the world of big data. Businesses have started utilizing machine learning models to extract new insights and to shape the strategic visions of their operations.

This paper gave a brief introduction to the field and presented some of the common algorithms, elucidating their processes and steps. It also listed the different types of learning styles, explaining the major differences in their logic and the tasks they can perform. Finally, it presented some of the applications and uses of machine learning in the public retailing business. Machine learning has the potential to break down many of the limits imposed by traditional analytical approaches and grants organizations the opportunity to make well-informed decisions. Many organizations have already put machine learning models into action and banked significant portions of their operations on them. It's important to keep in mind that despite all the current advancements and capabilities of machine learning, it is yet to replace human judgment, and executives should keep up with the advances in the field lest they fall far behind and endanger the continuity of their businesses.


References:

Bell, J. (2014). Machine Learning: Hands-on for developers and technical professionals: John Wiley & Sons.

Bose, I., & Mahapatra, R. K. (2001). Business data mining—a machine learning perspective. Information & Management, 39(3), 211-225. doi:10.1016/S0378-7206(01)00091-X

Burges, C. J. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2), 121-167. doi:10.1023/A:1009715923555

Caro, F., & Gallien, J. (2012). Clearance pricing optimization for a fast-fashion retailer. Operations Research, 60(6), 1404-1422.

Cho, Y. H., Kim, J. K., & Kim, S. H. (2002). A personalized recommender system based on web usage mining and decision tree induction. Expert Systems with Applications, 23(3), 329-342.

Columbus, L. (2016, June 4). Machine Learning Is Redefining The Enterprise In 2016. Forbes Magazine.

Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297. doi:10.1007/bf00994018

Fan, A. (2000). Selecting bankruptcy predictors using a support vector machine approach. Paper presented at the Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00), Volume 6.

Ferreira, K. J., Lee, B. H. A., & Simchi-Levi, D. (2015). Analytics for an online retailer: Demand forecasting and price optimization. Manufacturing & Service Operations Management, 18(1), 69-88.

Gaudin, S. (2016). The North Face sees A.I. as a perfect fit. Retrieved from Computerworld website:

Giering, M. (2008). Retail sales prediction and item recommendations using customer demographics at store level. ACM SIGKDD Explorations Newsletter, 10(2), 84-89.

Hartigan, J. A., & Wong, M. A. (1979). Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1), 100-108.

Hssina, B., Merbouha, A., Ezzikouri, H., & Erritali, M. (2014). A comparative study of decision tree ID3 and C4.5. Int. J. Adv. Comput. Sci. Appl., 4(2).

Hu, Y.-C., & Ansell, J. (2007). Measuring retail company performance using credit scoring techniques. European Journal of Operational Research, 183(3), 1595-1606. doi:10.1016/j.ejor.2006.09.101

Kim, K.-j., & Ahn, H. (2008). A recommender system using GA K-means clustering in an online shopping market. Expert Systems with Applications, 34(2), 1200-1209.

Kirk, M. (2014). Thoughtful Machine Learning: A Test-Driven Approach: O'Reilly Media, Inc.

Kotsiantis, S. B., Zaharakis, I. D., & Pintelas, P. E. (2006). Machine learning: a review of classification and combining techniques. Artificial Intelligence Review, 26(3), 159-190. doi:10.1007/s10462-007-9052-3

Loh, W. Y. (2011). Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1), 14-23. doi:10.1002/widm.8

Lu, C.-J., & Kao, L.-J. (2016). A clustering-based sales forecasting scheme by using extreme learning machine and ensembling linkage methods with applications to computer server. Engineering Applications of Artificial Intelligence, 55, 231-238. doi:10.1016/j.engappai.2016.06.015

Maglogiannis, I. G. (2007). Emerging artificial intelligence applications in computer engineering: real word AI systems with applications in eHealth, HCI, information retrieval and pervasive technologies (Vol. 160): Ios Press.

Marsland, S. (2015). Machine learning: an algorithmic perspective: CRC press.

Mitchell, T. M. (1997). Machine Learning. Burr Ridge, IL: McGraw Hill.

O’Rourke, J. (2016). How Machine Learning Will Improve Retail and Customer Service. Retrieved from Data Informed website: http://data-informed.com/how-machine-learning-will-improve-retail-and-customer-service/

Özer, Ö., & Phillips, R. (2012). The Oxford handbook of pricing management: Oxford University Press.

Rokach, L., & Maimon, O. (2014). Data mining with decision trees: theory and applications: World scientific.

Senecal, S., & Nantel, J. (2004). The influence of online product recommendations on consumers’ online choices. Journal of retailing, 80(2), 159-169.

Shish. (2015). Big Data & Machine Learning Scenarios for Retail. Retrieved from Microsoft Developer Blog website: https://blogs.msdn.microsoft.com/shishirs/2015/01/26/big-data-machine-learning-scenarios-for-retail/

Smola, A. J., Schölkopf, B., & Müller, K.-R. (1998). The connection between regularization operators and support vector kernels. Neural Networks, 11(4), 637-649. doi:10.1016/S0893-6080(98)00032-X

Surowiecki, J. (2014). In praise of efficient price gouging.

Talluri, K. T., & Van Ryzin, G. J. (2006). The theory and practice of revenue management (Vol. 68): Springer Science & Business Media.

Van Ryzin, G. J., & Talluri, K. T. (2005). An introduction to revenue management. Tutorials in operations research, 142-195.

Wagstaff, K. (2012). Machine learning that matters. arXiv preprint arXiv:1206.4656.

Walter, F. E., Battiston, S., Yildirim, M., & Schweitzer, F. (2012). Moving recommender systems from on-line commerce to retail stores. Information Systems and E-Business Management, 10(3), 367-393. doi:10.1007/s10257-011-0170-8

Wilson, H. J., Mulani, N., & Alter, A. (2016). Sales Gets a Machine-Learning Makeover. MIT Sloan Management Review, May, 17.

Wu, J. (2012). The Uniform Effect of K-means Clustering Advances in K-means Clustering (pp. 17-35): Springer.

Yang, X.-S., Deb, S., & Fong, S. (2011). Accelerated Particle Swarm Optimization and Support Vector Machine for Business Optimization and Applications. In S. Fong (Ed.), Networked Digital Technologies: Third International Conference, NDT 2011, Macau, China, July 11-13, 2011. Proceedings (pp. 53-66). Berlin, Heidelberg: Springer Berlin Heidelberg.



· 4 min read

Design Pattern is essentially an architectural term that was introduced into the software engineering context by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (the Gang of Four, GoF) in their 1994 book Design Patterns: Elements of Reusable Object-Oriented Software (1).

Design patterns are a collection of reusable solutions and best practices for common problems in software design. A pattern is not an algorithm you can drop directly into your code; rather, it's a template: UML diagrams and descriptions of how you should go about solving the problem, how you should design the classes, and how to link them together. These patterns are proven, tested paradigms that keep the code modular, clean, and easily maintained.

Many say that you need to perfect your understanding of OOP before you can actually understand design patterns. For me, personally, I found it so much easier to understand object-oriented concepts and terms after my instructor in one of my classes introduced design patterns to us. Many of the OOP concepts (abstract classes, polymorphism, interfaces), while I understood how they worked and had memorized some of their uses, became so much clearer when I started learning design patterns and began to appreciate the beauty of OO.

The Gang of Four broke their design patterns down into 3 categories (I think there are now 4). Nevertheless, below is a list of the patterns the GoF introduced, sorted by type.

In this blog, I’ll share with you some examples and cases where Design patterns really came to my aid when I worked on different problems.

Observer Design Pattern is coming up soon…

The following is copied from Software design pattern - Wikipedia so I can link to it from my blog posts in the future.

Patterns by Type

Creational

Creational patterns are ones that create objects for you, rather than having you instantiate objects directly. This gives your program more flexibility in deciding which objects need to be created for a given case.

  • Abstract factory pattern groups object factories that have a common theme.
  • Builder pattern constructs complex objects by separating construction and representation.
  • Factory method pattern creates objects without specifying the exact class to create.
  • Prototype pattern creates objects by cloning an existing object.
  • Singleton pattern restricts object creation for a class to only one instance.
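To make the Creational category a bit more concrete, here's a minimal Singleton sketch in JavaScript (the stack this blog uses). The `Config` name and its contents are purely illustrative, not from any real project:

```javascript
// Minimal Singleton sketch: an IIFE closes over one shared instance
// and only ever hands out that same object.
var Config = (function () {
  var instance = null;

  function createInstance() {
    // Illustrative payload; a real app might load settings here
    return { appName: "demo", settings: {} };
  }

  return {
    getInstance: function () {
      if (instance === null) {
        instance = createInstance();
      }
      return instance; // always the same object
    }
  };
})();

var a = Config.getInstance();
var b = Config.getInstance();
console.log(a === b); // true — both variables point to the single instance
```

In Node.js you often get this behavior for free, since `require()` caches a module's exports, but the closure version above shows the intent of the pattern explicitly.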

Structural

These concern class and object composition. They use inheritance to compose interfaces and define ways to compose objects to obtain new functionality.

  • Adapter allows classes with incompatible interfaces to work together by wrapping its own interface around that of an already existing class.
  • Bridge decouples an abstraction from its implementation so that the two can vary independently.
  • Composite composes zero-or-more similar objects so that they can be manipulated as one object.
  • Decorator dynamically adds/overrides behavior in an existing method of an object.
  • Facade provides a simplified interface to a large body of code.
  • Flyweight reduces the cost of creating and manipulating a large number of similar objects.
  • Proxy provides a placeholder for another object to control access, reduce cost, and reduce complexity.
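And a quick Structural example: a tiny Decorator sketch in JavaScript. The coffee/milk names are just an illustration, assuming plain objects rather than classes:

```javascript
// A simple component: a drink with a cost and a label
function coffee() {
  return {
    cost: function () { return 2; },
    label: "coffee"
  };
}

// Decorator: wraps an existing drink and extends its behavior
// without modifying the original object or its "class"
function withMilk(drink) {
  return {
    cost: function () { return drink.cost() + 0.5; },
    label: drink.label + " + milk"
  };
}

var order = withMilk(coffee());
console.log(order.label);  // "coffee + milk"
console.log(order.cost()); // 2.5
```

Because each decorator takes a drink and returns a drink, they can be stacked in any combination at runtime.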

Behavioral

Most of these design patterns are specifically concerned with communication between objects.

  • Chain of responsibility delegates commands to a chain of processing objects.
  • Command creates objects which encapsulate actions and parameters.
  • Interpreter implements a specialized language.
  • Iterator accesses the elements of an object sequentially without exposing its underlying representation.
  • Mediator allows loose coupling between classes by being the only class that has detailed knowledge of their methods.
  • Memento provides the ability to restore an object to its previous state (undo).
  • Observer is a publish/subscribe pattern which allows a number of observer objects to see an event.
  • State allows an object to alter its behavior when its internal state changes.
  • Strategy allows one of a family of algorithms to be selected on-the-fly at runtime.
  • Template method defines the skeleton of an algorithm as an abstract class, allowing its subclasses to provide concrete behavior.
  • Visitor separates an algorithm from an object structure by moving the hierarchy of methods into one object.
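Since the full Observer post is still coming up, here's a tiny publish/subscribe sketch in JavaScript as a preview. All names (`Subject`, the `"saved"` event) are illustrative:

```javascript
// Minimal Observer (publish/subscribe) sketch
function Subject() {
  this.observers = []; // list of subscribed callback functions
}

Subject.prototype.subscribe = function (fn) {
  this.observers.push(fn);
};

Subject.prototype.notify = function (event) {
  // Every subscribed observer sees the event
  this.observers.forEach(function (fn) { fn(event); });
};

var subject = new Subject();
var seen = [];
subject.subscribe(function (e) { seen.push("A:" + e); });
subject.subscribe(function (e) { seen.push("B:" + e); });
subject.notify("saved");
console.log(seen); // ["A:saved", "B:saved"]
```

The key point is that the subject knows nothing about what its observers do; it only knows how to call them.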

References:

Software design pattern - Wikipedia


Footnotes:

(1) You see how my coming into the world enlightened and inspired all the brilliant minds around the globe

· 3 min read

In this post, I'll demonstrate 2 ways to perform custom sorting. It all started when a client asked to sort a lookup table/dropdown in a specific order: S -> U -> N.

Example:

Statuses:

|id |     name      |
|---|:-------------:|
| 1 | Complete (S) |
| 2 |Dropped out (U)|
| 3 |in progress (N)|
| 4 |Lost Funds (U) |

Usually what I'd do is add a sort_order column to the table in the database and use it when querying the records. As such,

|id |     name      | sort_order |
|---|:-------------:|:---------- |
| 1 | Complete (S) | 1 |
| 2 |Dropped out (U)| 2 |
| 3 |in progress (N)| 4 |
| 4 |Lost Funds (U) | 3 |

and the query would simply be:

```sql
SELECT *
FROM statuses
ORDER BY sort_order
```

This is obviously the cleanest and the recommended way to approach this problem.

However, in this particular case, we weren't at liberty to make changes to the database design. So we had to choose between performing the sort at query time with some custom SQL, or processing it in JS (the application in question is a Node.js/Angular application).

Solution 1: Custom Order Using SQL

The first solution uses a regular expression to extract the last bit of each name ((S), (N), (U)), which is what will be used in the sorting:

```sql
SELECT *
FROM statuses
ORDER BY
  CASE substring(name from '\([A-Z]\)')
    WHEN '(S)' THEN 1
    WHEN '(U)' THEN 2
    WHEN '(N)' THEN 3
    ELSE 5
  END
```

Solution 2: Custom Order by Processing the Data in JS

Given that we already had specific criteria for the sort (S -> U -> N), I created a sorter variable holding those 3 values along with the corresponding desired order. JavaScript's sort function accepts a custom comparison function, so we create one: compare(). What this function basically does is use the same regular expression to extract the bit in parentheses; for each element it finds the corresponding order in the sorter variable and uses that for the comparison.

```javascript
var compare = function (a, b) {
  var sorter = [
    { value: "(S)", order: 1 },
    { value: "(U)", order: 2 },
    { value: "(N)", order: 3 }
  ];

  // Extract the "(X)" suffix from each name
  var a_status = a.name.match(/\([A-Z]\)/g);
  var b_status = b.name.match(/\([A-Z]\)/g);

  // Compare by the desired order defined in sorter
  return sorter.find(function (element) { return element.value == a_status[0]; }).order -
         sorter.find(function (element) { return element.value == b_status[0]; }).order;
};

// `release` is the array of status records fetched from the database
var sorted = release.sort(compare);
console.log(sorted);
```
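To sanity-check the comparator, here's a self-contained run against the sample rows from the table above (assuming the rows come back as plain objects with `id` and `name`):

```javascript
// Same comparator as in the post, applied to the sample rows
var compare = function (a, b) {
  var sorter = [
    { value: "(S)", order: 1 },
    { value: "(U)", order: 2 },
    { value: "(N)", order: 3 }
  ];
  var a_status = a.name.match(/\([A-Z]\)/g);
  var b_status = b.name.match(/\([A-Z]\)/g);
  return sorter.find(function (e) { return e.value == a_status[0]; }).order -
         sorter.find(function (e) { return e.value == b_status[0]; }).order;
};

// Sample data matching the Statuses table
var statuses = [
  { id: 1, name: "Complete (S)" },
  { id: 2, name: "Dropped out (U)" },
  { id: 3, name: "in progress (N)" },
  { id: 4, name: "Lost Funds (U)" }
];

var sorted = statuses.sort(compare);
console.log(sorted.map(function (s) { return s.id; })); // [1, 2, 4, 3]
```

Note that sort is stable in modern engines, so the two (U) rows keep their original relative order.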

Eventually, we opted for the second solution. We use an ORM Node module, and implementing the custom order in SQL wouldn't have been as easy.

Confession:

I totally forgot about adding the sort_order column until the time I wrote this blog post. But then again, we wouldn't have been able to use it anyway. Kudos to Rubber Ducking!!!

· 2 min read

One small issue I faced while working on my portfolio website was that I wanted the project modals to be able to open each other. For example, I wanted the portal modal to be able to reference the modals of all the other applications and APIs it's linked to. The problem was a little compound:

Do I want to close the current modal before opening the other modal? Or do I want to stack the modals on top of each other?

Approach 1

What I first found as a solution was to make use of the two attributes that Bootstrap offers for modals: data-dismiss="modal" and data-toggle="modal".

```html
<a href="#otherModal" data-dismiss="modal" data-toggle="modal">This is the link to the other modal</a>
```

This worked fine: it closes the portal modal and opens the next one. BUT for some reason, the second modal wouldn't scroll. And since the modal backdrop was a little transparent, I could see the page behind the modal scrolling 🤔

Approach 2

Another fast and easy solution was to use jQuery to toggle the modal.

```html
<a id="modalTogglerBtn">This is the link to the other modal</a>
```

```javascript
$("#modalTogglerBtn").on("click", function () {
  $('#myModal').modal('toggle');
});
```

This did the trick. Well, not entirely. The new modal would open and it would scroll. So what was the problem? As I closed the modal on top of the stack, the same scrolling problem would happen to the first modal (the one lower in the stack).

Here's the problem then: something about closing a modal messes the scrolling up and transfers it from the modals back to the body.

Approach 3 (The Solution) 🎉

So here's what I found out. Bootstrap, upon opening a modal, adds a modal-open class to the body tag. This class is what keeps the scrolling focused on the modals rather than the body itself. When we close the top modal, Bootstrap removes that class from the body tag, whether or not other modals are still open. That's why neither of the first approaches worked. So we need a mechanism that checks for open modals and restores the class if any remain.

```javascript
$(document).on('hidden.bs.modal', function (event) {
  // If any modal is still visible, put the class back on the body
  if ($('.modal:visible').length) {
    $('body').addClass('modal-open');
  }
});
```

The hidden.bs.modal event is fired when the modal has finished being hidden (after CSS transitions have completed). Ref: JavaScript · Bootstrap, Modal Events.

· 2 min read

So obviously, this blog will be more of a notebook for me than an actual blog. And just so we get things started, here's that…

A note before we get started with the steps: the following steps were implemented and tested on a macOS machine. I assume the differences on other OSes will mainly be in the file paths.

Download files from:

You need to have those files on your computer in the HOME directory.

If you decide to place those files elsewhere, make sure to update the script accordingly.

```bash
if [ -f ~/.git-completion.bash ]; then
  . ~/.git-completion.bash
fi

if [ -f ~/.git-prompt.sh ]; then
  . ~/.git-prompt.sh
fi

WHITE="\[\033[1;37m\]"
MAGENTA="\[\033[0;35m\]"
YELLOW="\[\033[0;33m\]"
BLUE="\[\033[34m\]"
LIGHT_GRAY="\[\033[0;37m\]"
CYAN="\[\033[0;36m\]"
GREEN="\[\033[0;32m\]"
RED="\[\033[0;31m\]"
GIT_PS1_SHOWDIRTYSTATE=true
export LS_OPTIONS='--color=auto'
export CLICOLOR='Yes'
export LSCOLORS=gxfxbEaEBxxEhEhBaDaCaD

export PS1=$LIGHT_GRAY"\W"'$(
  if [[ $(__git_ps1) =~ \*\)$ ]]
  # a file has been modified but not added
  then echo "'$YELLOW'"$(__git_ps1 " (%s)")
  elif [[ $(__git_ps1) =~ \+\)$ ]]
  # a file has been added, but not committed
  then echo "'$MAGENTA'"$(__git_ps1 " (%s)")
  # the state is clean, changes are committed
  else echo "'$RED'"$(__git_ps1 " (%s)")
  fi)'$WHITE"$ "
```

If you want to use different colors than the ones provided above, visit this link.

Once you add that script to ~/.bash_profile, you'll need to either restart the terminal for the changes to take full effect, or run `. ~/.bash_profile` in every session you have open.

Finally, I can't claim authorship of the code presented above. I found this note deep in my old notes, and I think the starting code came from somewhere on StackOverflow. I will share the source as soon as I get a handle on it.

· One min read

I've always wanted to write. I may not be a great writer, but that's something that comes with practice, so why not take it one step at a time? I already keep notes and sometimes journals to document my thinking process or my findings. SHARE THAT! (🐥 Rubber Ducking 🐥)

So I think that's what this is about: I'm rubber ducking to you guys. Sometimes I may find a really cool solution. Most of the time, it's just a solution that might need a bit of tweaking, or even thinking in a different direction.

This is my clutter space, my journals, my notebook, and my desk. If someone finds their way here, I would appreciate your feedback and comments. If I'm thinking in the wrong direction, show me the way 💡