Cortex Logic solves core business problems in a practical, cost-effective way using all available structured and unstructured data together with smart technology. Solutions are implemented via the CORTEX AI Engine in an end-to-end, full-stack, integrated, scalable, and secure manner. Cortex Logic operationalizes Data Science and AI by following international data science standards and implementing automated analytics within a champion-challenger approach, which is maintained and kept up to date throughout the deployment and operational phases. The CORTEX AI Engine is used to operationalize solutions such as strategic business transformation and optimization, human capital valuation and employee profiling, intelligent virtual assistants, robo-advisors, process optimization, predictive maintenance, fraud detection, churn prediction, advanced risk scoring, machine learning-based trading, real-time customer insights, smart recommendations and purchase prediction, personalized search, cyber security, medical risk prediction, and precision medicine. See Figure 7 for a list of solutions from the CORTEX AI Library.
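The champion-challenger approach mentioned above can be sketched in a few lines: the model currently in production (the champion) is only replaced when a candidate (the challenger) scores strictly better on held-out data. This is a minimal illustrative sketch, not the CORTEX AI Engine's implementation; the function names, the toy models, and the use of mean absolute error as the comparison metric are all assumptions for the example.

```python
def evaluate(model, data):
    # Score a model on held-out data; here, mean absolute error.
    return sum(abs(model(x) - y) for x, y in data) / len(data)

def champion_challenger(champion, challenger, holdout):
    # Keep the champion unless the challenger scores strictly better.
    champ_err = evaluate(champion, holdout)
    chall_err = evaluate(challenger, holdout)
    return challenger if chall_err < champ_err else champion

# Toy example: the true relationship is y = 2x.
holdout = [(x, 2 * x) for x in range(10)]
champion = lambda x: 1.5 * x      # current production model
challenger = lambda x: 2.1 * x    # candidate retrained model

winner = champion_challenger(champion, challenger, holdout)
```

Run periodically during the operational phase, this comparison keeps the deployed model up to date as data drifts.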
Thriving Business in the Smart Technology Era
Figure 8 lists relevant industries where these AI-based solutions can be and are being deployed, Figure 9 maps each solution from the CORTEX AI Library to a relevant industry, and Figure 10 illustrates the application stack for a typical AI solution.
The Big Data & Analytics methodology combines the sequential execution of tasks in some phases with highly iterative execution in others. Because of the scale issues associated with a Big Data & Analytics system, designers must adopt a pragmatic approach, modifying and expanding their processes gradually across several activities, rather than designing the system once and for all with only the end state in mind. The main phases are typically as follows:
- Analyze and evaluate business use case
- Develop the business hypothesis
- Develop analytics approach
- Build and prepare data sets
- Select and prepare the analytical models
- Build the production ready system (scale and performance)
- Measure and monitor
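The phases above can be sketched as a simple ordered pipeline in which each step receives and returns a shared context, and earlier steps can be revisited by rerunning the pipeline. This is a hypothetical skeleton for illustration only; the step names follow the listed phases, and the placeholder lambdas stand in for real implementations.

```python
def run_pipeline(steps, context):
    # Execute phases in order; each step receives and returns the context.
    for name, step in steps:
        context = step(context)
        context.setdefault("log", []).append(name)
    return context

# Placeholder steps named after the phases listed above (assumed names).
steps = [
    ("analyze_use_case", lambda c: c),
    ("business_hypothesis", lambda c: c),
    ("analytics_approach", lambda c: c),
    ("build_datasets", lambda c: c),
    ("select_models", lambda c: c),
    ("productionize", lambda c: c),
    ("measure_monitor", lambda c: c),
]
result = run_pipeline(steps, {})
```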
This methodology also corresponds with the Data Science methodologies that Cortex Logic implements in operationalizing Data Science and developing AI-based solutions (see Figure 9):
- Cross-Industry Standard Process for Data Mining (CRISP-DM)
- Analytics Solutions Unified Method for Data Mining/Predictive Analytics (ASUM-DM)
The sequence of the CRISP-DM phases is not strict and moving back and forth between different phases is always required. The arrows in the process diagram indicate the most important and frequent dependencies between phases. The outer circle in the diagram symbolizes the cyclic nature of data mining itself. A data mining process continues after a solution has been deployed. The lessons learned during the process can trigger new, often more focused business questions and subsequent data mining processes will benefit from the experiences of previous ones.
The initial business understanding phase focuses on understanding the project objectives and requirements from a business perspective, then converting this knowledge into a data mining problem definition and a preliminary plan designed to achieve the objectives. A decision model, especially one built using the Decision Model and Notation (DMN) standard, can be used.
The data understanding phase starts with initial data collection and proceeds with activities to become familiar with the data, identify data quality problems, discover first insights into the data, and detect interesting subsets that suggest hypotheses about hidden information.
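A common first activity in this phase is a basic data-quality profile: missing-value counts and distinct-value counts per column. The sketch below, using only the standard library, is a hypothetical example; the record structure and column names are assumptions for illustration.

```python
def profile(rows, columns):
    # Basic data-quality profile: missing and distinct counts per column.
    report = {}
    for col in columns:
        values = [r.get(col) for r in rows]
        missing = sum(v is None for v in values)
        distinct = len(set(v for v in values if v is not None))
        report[col] = {"missing": missing, "distinct": distinct}
    return report

# Toy records with one missing value (assumed schema).
rows = [
    {"age": 34, "churned": "no"},
    {"age": None, "churned": "yes"},
    {"age": 51, "churned": "no"},
]
report = profile(rows, ["age", "churned"])
```

Such a report flags quality problems early and points to subsets worth deeper exploration.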
The data preparation phase covers all activities needed to construct the final dataset (the data that will be fed into the modeling tool(s)) from the initial raw data. Data preparation tasks are likely to be performed multiple times, and not in any prescribed order. Tasks include table, record, and attribute selection, as well as the transformation and cleaning of data for modeling tools.
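Two typical transformation tasks in this phase are imputing missing values and scaling numeric attributes. The sketch below (mean imputation followed by min-max scaling) is one assumed recipe for illustration; real preparation pipelines vary by data and modeling tool.

```python
def prepare(rows, numeric_cols):
    # Impute missing numerics with the column mean, then min-max scale to [0, 1].
    for col in numeric_cols:
        observed = [r[col] for r in rows if r[col] is not None]
        mean = sum(observed) / len(observed)
        for r in rows:
            if r[col] is None:
                r[col] = mean
        lo = min(r[col] for r in rows)
        hi = max(r[col] for r in rows)
        span = (hi - lo) or 1.0   # avoid division by zero for constant columns
        for r in rows:
            r[col] = (r[col] - lo) / span
    return rows

rows = [{"age": 20}, {"age": None}, {"age": 40}]
prepared = prepare(rows, ["age"])
```

Because such steps are revisited repeatedly, keeping each transformation as a small, rerunnable function makes the iteration cheap.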
In the modeling phase, various modeling techniques are selected and applied, and their parameters are calibrated to optimal values. Typically, several techniques are available for the same data mining problem type, and some techniques have specific requirements on the form of the data, so stepping back to the data preparation phase is often needed.
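Parameter calibration can be as simple as an exhaustive search over a grid of candidate settings, keeping the one with the lowest validation error. This is a minimal sketch of that idea; the trivial "model family" (a line through the origin whose slope is the only parameter) and the function names are assumptions for the example.

```python
def grid_search(train, validate, param_grid, fit):
    # Calibrate parameters by exhaustive search over a grid,
    # keeping the setting with the lowest validation error.
    best_params, best_err = None, float("inf")
    for params in param_grid:
        model = fit(train, **params)
        err = sum(abs(model(x) - y) for x, y in validate) / len(validate)
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

# Toy model family: y = slope * x, "fitted" by simply trying each slope.
def fit(train, slope):
    return lambda x: slope * x

train = [(x, 3 * x) for x in range(5)]
validate = [(x, 3 * x) for x in range(5, 8)]
grid = [{"slope": s} for s in (1, 2, 3, 4)]
best_params, best_err = grid_search(train, validate, grid, fit)
```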
At this stage in the project you have built a model (or models) that appears to be of high quality from a data analysis perspective. Before proceeding to final deployment, it is important to evaluate the model more thoroughly and to review the steps executed to construct it, to be certain it properly achieves the business objectives. A key objective is to determine whether some important business issue has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.
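Evaluation against business objectives often means looking beyond a single accuracy number at the error trade-offs that matter commercially, such as false positives versus false negatives in a fraud or churn model. The sketch below computes confusion counts for a binary classifier; the toy labels are an assumption for illustration.

```python
def confusion(y_true, y_pred):
    # Confusion counts for a binary classifier: the raw material for
    # business-relevant trade-offs (false alarms vs missed cases).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn}

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
cm = confusion(y_true, y_pred)
precision = cm["tp"] / (cm["tp"] + cm["fp"])
```

Whether the resulting false-positive and false-negative rates are acceptable is precisely the business decision this phase is meant to settle.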
Creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data, the knowledge gained will need to be organized and presented in a way that is useful to the customer. Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data scoring (e.g., segment allocation) or data mining process. In many cases it is the customer, not the data analyst, who carries out the deployment steps. Even if the analyst deploys the model, it is important for the customer to understand up front the actions that will need to be carried out in order to actually make use of the created models.
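The "repeatable data scoring" end of the deployment spectrum can be illustrated by persisting the calibrated model parameters and exposing a batch scoring function that anyone (including the customer) can rerun. The file name, the threshold parameter, and the segment labels below are all hypothetical, chosen only to make the sketch concrete.

```python
import json

def save_model(params, path):
    # Persist calibrated model parameters so scoring is repeatable.
    with open(path, "w") as f:
        json.dump(params, f)

def score_batch(path, records):
    # Load the persisted model and assign each record to a segment.
    with open(path) as f:
        params = json.load(f)
    threshold = params["threshold"]
    return ["high" if r["risk"] >= threshold else "low" for r in records]

save_model({"threshold": 0.7}, "model.json")
segments = score_batch("model.json", [{"risk": 0.9}, {"risk": 0.2}])
```

Separating the persisted parameters from the scoring code is what lets the customer rerun the process without the analyst present.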
Figure 12 illustrates the spectrum of analytics used in CORTEX AI Solutions, whereas Figure 13 shows a sample of cutting-edge Smart Technologies being used in CORTEX AI Solutions.