The evolution of Artificial Intelligence will bring about huge changes in IT infrastructure
by Ready For AI · August 11, 2018
As artificial intelligence continues to evolve, it will have a profound impact on IT architecture. Integrating AI into existing systems and applications will require changes to infrastructure, software, and data management. AI will also create new opportunities for building more intelligent systems and applications, driving a shift towards decentralized computing and distributed systems. Its ability to automate tasks and improve efficiency will sharpen the focus on data-driven decision-making and real-time analytics. Overall, the evolution of AI is likely to fundamentally change how IT architecture is designed and implemented, paving the way for more intelligent, efficient, and innovative systems.
The era of artificial intelligence has arrived
The importance of artificial intelligence cannot be overstated: we are undeniably in its era. Advances in big data, high-capacity storage, high-performance computing, and machine learning algorithms have enabled groundbreaking applications that were previously unimaginable.
One example of AI’s superiority over humans is in complex strategy games such as Go. AI-driven applications such as image recognition and speech recognition have also become increasingly vital, as the popularity of intelligent voice assistants attests. The commercial rollout of self-driving cars is yet another sign that we are truly entering the age of artificial intelligence.
Artificial intelligence demands more from IT infrastructure
In the early days of computing, only assembly language experts, compiler experts, and operating system experts could develop even simple applications. Artificial intelligence is in a similar state: only experts in fields such as statistics or distributed systems know how to build AI systems and deploy them at scale, because the field still lacks the mass-oriented abstraction tools that would accelerate development.
The rapid development of machine learning techniques has far outpaced the corresponding IT infrastructure. The interdependence between applications and infrastructure is subtle, with each both limiting and reinforcing the other. As applications improve and evolve, they demand more from the underlying resources, driving innovation in infrastructure. Conversely, innovation and cost reduction in infrastructure lead to disruptive developments in applications, giving users unprecedented experiences. The shift from traditional slides to PowerPoint to online photo-sharing platforms like Pinterest shows how drastically applications evolve as the underlying infrastructure does.
Evolution of IT infrastructure
At the beginning of this century, the rapid growth of the commercial internet rested on Intel’s x86 instruction set, standardized operating systems from Microsoft, relational databases from Oracle, Ethernet devices from Cisco, and networked storage from EMC. Amazon, eBay, Yahoo, and even the original versions of Google and Facebook were built on this infrastructure. This is what we call the first stage of the internet.
As the network matured, the number of internet users grew from 16 million in 1995 to more than 3 billion by the end of 2015, and application requirements for scale and performance rose accordingly. The technology of the “client/server” era no longer suited the needs of the internet giants, in terms of both technical feasibility and cost-effectiveness.
Internet companies therefore found another way. Drawing on their technical expertise and academic progress, Google, Facebook, and Amazon defined a new class of infrastructure: horizontally scalable, programmable, usually open source, and low in unit cost. A series of platforms and tools, including Linux, KVM, Xen, Docker, Kubernetes, Mesos, MySQL, MongoDB, Kafka, Hadoop, and Spark, formed the era of cloud computing, which we call the second stage.
How will future infrastructure evolve?
To borrow a metaphor: people no longer want simply to eat the crops and livestock they raise; they want them prepared into finer dishes. The solution is to find a skilled chef, or at least the right recipe.
The same holds for artificial intelligence and IT architecture. The next-generation architecture will be completely different from the common architectures that preceded it: more targeted and more distinctive. Its purpose is no longer just to provide enough food, but to provide food that suits individual tastes and requirements, like a dish that reflects a master chef’s understanding of its ingredients.
However, artificial intelligence applications require vast amounts of data and consume enormous computing resources. These demands make AI difficult to generalize within the conventional computational paradigm proposed by John von Neumann. Artificial intelligence therefore represents a fundamentally new architecture, one that requires us to rethink infrastructure, tools, and development practices. Changes in IT architecture are already happening:
- Specialized hardware optimized for highly concurrent numerical computation, with many compute cores and high-bandwidth memory, is on the rise. Such chips are particularly suited to the fast, low-precision floating-point arithmetic that neural networks require (see the first sketch after this list).
- System software is co-designed with the hardware and the artificial intelligence algorithms so that the computing power of every transistor is fully exploited.
- Distributed computing frameworks scale model computation effectively across multiple nodes, whether for training or inference (see the second sketch after this list).
- New data and metadata management systems provide reliable, unified, and reproducible storage for the large datasets used in training and prediction.
- Ultra-low-latency network equipment and related infrastructure enable machines to act intelligently on real-time data and content.
- End-to-end platforms encapsulate and abstract the entire artificial intelligence workflow, hiding its complexity from application developers. Examples include Uber's Michelangelo, Facebook's FBLearner, and the commercially available Determined AI.
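To make the low-precision point concrete, here is a minimal NumPy sketch, illustrative only and not tied to any particular chip, comparing the same matrix product in float16 and float32. Half precision halves the memory footprint while keeping the error small enough for typical neural-network workloads, which is exactly the trade-off AI accelerators exploit.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512)).astype(np.float32)
b = rng.standard_normal((512, 512)).astype(np.float32)

full = a @ b  # float32 reference result
low = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

# float16 uses half the bytes of float32 for the same matrix
print("bytes per matrix:", a.nbytes, "vs", a.astype(np.float16).nbytes)
# the result stays close enough for most neural-network purposes
print("max relative error:", np.max(np.abs(full - low) / (np.abs(full) + 1e-6)))
```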
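Similarly, here is a toy sketch, in plain NumPy rather than any real framework, of the data-parallel pattern those distributed frameworks implement: each worker computes gradients on its own shard of the data, the gradients are averaged (as an all-reduce would do across machines), and the shared model is updated. Real systems run the workers on separate nodes; here they are simulated sequentially for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
true_w = rng.standard_normal(10)
y = X @ true_w + 0.01 * rng.standard_normal(1000)

w = np.zeros(10)                             # shared model parameters
shards = np.array_split(np.arange(1000), 4)  # 4 simulated workers

for step in range(100):
    # each "worker" computes the squared-error gradient on its own shard
    grads = [2 * X[i].T @ (X[i] @ w - y[i]) / len(i) for i in shards]
    # "all-reduce": average the workers' gradients, then update the model
    w -= 0.1 * np.mean(grads, axis=0)

print("error in recovered weights:", np.linalg.norm(w - true_w))
```

Averaging per-shard gradients gives the same update as one big batch, which is why this pattern scales: adding workers splits the data without changing the mathematics.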