Friday, December 27, 2024

Next-Generation AI/ML and Applications

Solid positive trends for Artificial Intelligence (AI) / Machine Learning (ML) in the coming years

The AI/ML market, in its size, applications, and investments, is growing exponentially, in concert with several supportive and strongly connected trends in areas such as data science, autonomy, and virtual reality. Let me bring attention to several of these prominent industry and technology trends, all of which point to one thing: AI/ML is becoming a necessity.

The first trend is the exponential growth of big data: a data explosion driven by our growing ability to collect vast amounts and varieties of data at lightning speed. The result is data sets so large that they exceed our traditional capacity to process them effectively. AI/ML is necessary to extract the best value from that overwhelming flow of data. In the coming years, those who can benefit from such abundant availability of all kinds of data will navigate to the best competitive position.

The second trend is in autonomy. Soon, it will not only be self-driving cars and autonomous vehicles; it will be autonomous aircraft, cranes, restaurants, factories, and battlefields. At home, we already have autonomous vacuum cleaners, lawn mowers, snow blowers, and security systems, and more are on the way. I recently watched a video of an automobile production facility with not a single human being in view. That again points to the urgent need for effective and progressive AI/ML in all of these applications.

I am sure all of us know of companies and people rushing to create, invest in, and purchase products, services, and assets in a virtual world worth billions of dollars. The real potential of Virtual Reality (VR) cannot be unleashed without revolutionary AI/ML, non-traditional sensing, efficient data handling, and advanced visualization techniques.

Are we ready for this revolution?

We are faced with handling AI/ML and data analytics technology that is still maturing, growing, and moving at a rapid pace. Competition for the required skills and resources is imminent. Companies already lack skill sets in this field and are feverishly competing to acquire them. However, many cannot move past the technology hype phase, with its inflated and unrealistic expectations, to a better understanding of the state of the art, current capabilities, and limitations. There are many gaps, missing tools, and essential skill sets required to take the field to the next level.

The AI/ML topic has recently been over-commercialized and often oversold with respect to what it is and what it can do. There are many fragmented efforts at various levels of maturity that do not currently add up to disruptive outcomes. We need to add depth and consistency to move the needle and advance the state of the art, instead of just using currently available tools. It is essential to align forces, efforts, and resources to keep pace with the movement and make significant contributions. We also lack the common infrastructure, development processes, and tools to handle the overwhelming need, its progression, and the chaotic responses to it.

There are gaps in current AI/ML algorithms, despite the significant advances we have made in this field since its beginnings in the last century. If we can identify and close those gaps, we will make the expected progress in the viability and relevance of these algorithms and techniques.

What is needed to accelerate our progress?

First and foremost, neural network and machine learning architectures need to advance to more effective ones. If a neural network's building components do not have the correct elements and the right genetics, it will not produce the intelligence we all expect. For example, an architecture that allows nodes with integrators, differentiators, extensive feedback loops, and intra-node connections, instead of traditional plain weights and biases, would be required to model system dynamics that are classically described by sets of ordinary or partial differential equations (a minimal sketch of such a node follows below). If a neural model uses only weights and biases, training will attempt to model the data rather than the original system. To let the technique model the system itself, we must build the genetics that control its evolution during learning into its fundamental structure and architecture. This allows an optimization technique to vary the basic parameters of the model to learn the data.

In such a scheme, far fewer input/output pairs will be needed to train a model, such as a neural network, to adjust itself using the data provided. The trained system will be a more realistic replica of the original system. Another advantage comes from limiting the number of degrees of freedom adjusted during training: the system dimension is obviously much smaller than in a brute-force weights-and-biases architecture.
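
To make this concrete, here is a minimal sketch, in plain NumPy, of the kind of node described above: a unit whose internal state passes through an integrator and a feedback loop, so that its trainable parameters (weights, input gains, time constant) shape a dynamic system rather than a static input-output map. The sizes, constants, and Euler integration step are illustrative assumptions, not a definitive design.

```python
import numpy as np

class DynamicNode:
    """A neural unit with an integrator state and a feedback loop:
    it evolves as dx/dt = (-x + W*tanh(x) + B*u) / tau, i.e. a small
    dynamic system, not a memoryless weighted sum. All sizes and
    constants here are illustrative."""

    def __init__(self, n_states, n_inputs, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.5, size=(n_states, n_states))  # feedback weights
        self.B = rng.normal(scale=0.5, size=(n_states, n_inputs))  # input weights
        self.tau = tau                # time constant: one more trainable degree of freedom
        self.x = np.zeros(n_states)   # integrator (internal state)

    def step(self, u, dt=0.01):
        """Advance the internal state by one Euler integration step."""
        dxdt = (-self.x + self.W @ np.tanh(self.x) + self.B @ u) / self.tau
        self.x = self.x + dt * dxdt
        return self.x

# Driving the node with a constant input traces a trajectory over time,
# which is what lets training target the system, not just the data.
node = DynamicNode(n_states=3, n_inputs=1)
for _ in range(100):
    state = node.step(np.array([1.0]))
print(np.round(state, 3))
```

Training such a node would mean optimizing W, B, and tau against observed trajectories: a far smaller parameter set than a deep stack of static layers.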

In summary, this will allow a neural network model to learn the behavior of a specific dynamic system using a much smaller network and far fewer training data pairs. Similarly, various other types of neural network architectures will be required. An analogy is found in human organs: each organ carries the specific genetics for making the cells, its building blocks, that construct the function of that particular organ. Heart cells cannot function as liver cells, and vice versa, so each organ needs its own genetics to regenerate cells that serve its function. By the same logic, a neural network architecture must contain the building blocks necessary to model a specific system or function, so that it has the inherent capability to converge to a realistic model of the original system through an appropriate optimization technique.

Another gap lies in current machine learning models' capacity for cognition versus memorization. Most current neural network models can learn the data but not the concepts in the data, except for very generic data characteristics such as classes and categories within a data set. Can we provide a machine learning technique capable of reading a paper and summarizing its main concepts and points? Can that algorithm intelligently answer questions about those concepts?
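
To make the memorization-versus-cognition distinction concrete, here is a toy sketch of a purely frequency-based extractive summarizer. It scores sentences by word counts alone, so it reproduces surface statistics of the text without ever representing a concept, which is exactly the gap described above. The sample text is made up for illustration.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Return the sentence(s) whose words occur most often in the text.
    Only surface statistics are used; no concepts are modeled."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return " ".join(sorted(sentences, key=score, reverse=True)[:n_sentences])

text = ("Neural networks learn from data. Data drives the training of "
        "neural networks. A cat sat quietly on the mat.")
print(extractive_summary(text))
# The output is whichever sentence has the most frequent words; asking
# the model *why* is meaningless, since it holds no concept of anything.
```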

Most current machine learning techniques lack the ability to learn from unstructured data. In most current techniques, structured and accurately labelled data is required for training, and similarly structured inputs are required for recall. Can the free-form unstructured data available in the public domain be used to produce effective intelligence?
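
As a small illustration of that structural requirement, the sketch below (assuming scikit-learn is available; the texts and labels are made up) shows that free-form text must first be forced into a fixed-width numeric matrix, with a hand-assigned label per example, before a conventional supervised model can use it at all.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["engine vibration detected in sensor log",
         "routine maintenance completed without issues",
         "abnormal temperature spike in turbine data"]
labels = [1, 0, 1]  # 1 = anomaly, 0 = normal; labels assigned by hand

# Unstructured text -> structured bag-of-words matrix (one column per word).
X = CountVectorizer().fit_transform(texts)

# Only after that structuring step can a standard classifier be trained.
model = LogisticRegression().fit(X, labels)
print(model.predict(X))
```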

Core to inferring and capturing intelligence from available data is having the required computing power and architecture. We need high-performance, distributed, and ultimately quantum computing to process data at the required rates and speeds. Edge computing is necessary to optimize communication channels. Distributed high-performance computing is necessary to leverage the aggregate power of many computing elements. Consider the human brain: it is built of simple computing elements, neurons, but there are some 90 billion of them in one person. The correct infrastructure, computation framework, and interconnectivity are all foundational.
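
As a miniature of that distributed pattern, the sketch below uses Python's standard multiprocessing module to fan a computation out over several worker processes and aggregate their partial results; the workload and worker count are illustrative stand-ins for a real high-performance cluster.

```python
from multiprocessing import Pool

def partial_work(chunk):
    """A stand-in workload: reduce one shard of the data to a partial result."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4  # each worker plays the role of one simple computing element
    shards = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(partial_work, shards)  # computed in parallel
    # The answer emerges from aggregating many partial results, the same
    # pattern a distributed HPC framework applies at vastly larger scale.
    print(sum(partials))
```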

In general, AI/ML techniques will also need to provide some explainability of, and confidence in, the decisions they make, so that human users can understand the reasoning behind a decision and the risk they take in acting on it. That is a basic human need in our daily decision-making. If we coordinate efforts and leverage these prospects toward high common goals, we will progress steadily toward the AI/ML we aspire to and the applications we desire.
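
To show what even a minimal version of this looks like, the sketch below (illustrative toy data, assuming scikit-learn) pairs a prediction with a confidence score and a simple per-feature contribution, the two ingredients a human needs to weigh the risk of acting on the output.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: two features -> a binary decision. Values are made up.
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7]])
y = np.array([1, 1, 0, 0])
model = LogisticRegression().fit(X, y)

sample = np.array([[0.85, 0.2]])
confidence = model.predict_proba(sample)[0].max()  # how sure the model is
# For a linear model, coefficient * feature value is an honest, if basic,
# per-feature explanation of what pushed the decision one way.
contributions = model.coef_[0] * sample[0]

print(f"decision={model.predict(sample)[0]}, confidence={confidence:.2f}")
for name, c in zip(["feature_a", "feature_b"], contributions):
    print(f"{name} contribution: {c:+.3f}")
```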
