Charting AI's Course
The branch of AI known as Deep Learning (DL) will most likely continue to change the fields of manufacturing, healthcare, retail, autonomous vehicles, data analytics, and security. However, new expectations and goals will largely influence how we get there.
Overall, AI is rapidly learning to recognize our world's critical details. Some examples are natural language processing (NLP), visual recognition, recommendation systems, and translation. The challenge moving forward, however, is teaching AI to recognize objects and patterns in the outside world and then understand their meaning.
An effective analogy is language. As humans, we understand our world not just through the words we use and their definitions but also through the context of those words. For AI to become even more helpful, the next leap for deep learning is understanding the meaning of words as we do. For example, it will need to move beyond seeing an automobile as merely a machine composed of many parts and also recognize it as a human symbol of transportation, freedom, excitement, and occasional danger.
Yoshua Bengio, one of the three winners of the 2018 ACM A.M. Turing Award, relates the current phase of AI to a famous recent book, Thinking, Fast and Slow. In it, psychologist Daniel Kahneman describes human thinking as the product of two systems. System 1 comprises our immediate, habitual responses, which Bengio says is where AI's progress stands today. System 2 is our slower, more deliberate, rational thinking, and that is where AI needs to go.
Described another way, Leslie Valiant, winner of the 2010 ACM A.M. Turing Award, believes that for machines to exhibit intelligence, they need to be able to both learn from experience and then develop reasoning based on what they've learned. So far, AI has excelled at finding patterns but has struggled to understand their meaning.
Neuro-symbolic AI
Neural network (or deep learning) AI has been successful in various fields where the analysis of large amounts of data is required. For example, such systems can scan thousands of X-rays to find new patterns that indicate disease. In this way, they are self-learning systems.
Symbolic reasoning, on the other hand, refers to an AI's ability to understand the meaning of symbols. AIs of this variety can be fed rules to interpret instead of merely recognizing patterns. A basic example might be knowing that the appearance of a tumor in one part of the body (like the brain) might have more severe implications than one located elsewhere. But without the data ingestion capabilities of a neural AI, it might take a long time for operators to 'teach' a symbolic AI how to recognize tumors.
Neuro-symbolic AI combines both approaches: the data processing and deep learning power of deep neural networks with the logical reasoning of a symbolic AI engine. In our example above, the AI could first scan thousands of images to learn human anatomy and how to spot a tumor. Then, based on what it has learned, it could understand the seriousness of a patient's prognosis and perhaps even recommend specific treatment.
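The two-stage pipeline described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not a real diagnostic system: the scan IDs, labels, and severity rules are all invented for the example. The "neural" stage stands in for a trained network's output, and the symbolic stage applies explicit rules to reason about what that output means.

```python
# Minimal neuro-symbolic sketch (hypothetical example).
# Stage 1 ("neural"): pattern recognition that labels a scan.
# Stage 2 ("symbolic"): hand-written rules that interpret the label.

def neural_stage(scan_id):
    # Stand-in for a deep network's inference on image data.
    mock_predictions = {
        "scan-001": ("tumor", "brain"),
        "scan-002": ("tumor", "skin"),
        "scan-003": ("no_finding", None),
    }
    return mock_predictions[scan_id]

# Symbolic knowledge: what a finding *means* depends on its location.
SEVERITY_RULES = {
    ("tumor", "brain"): "high severity - refer to specialist immediately",
    ("tumor", "skin"): "moderate severity - schedule biopsy",
}

def symbolic_stage(finding, location):
    if finding == "no_finding":
        return "no action needed"
    return SEVERITY_RULES.get((finding, location), "unknown - flag for review")

def diagnose(scan_id):
    finding, location = neural_stage(scan_id)
    return symbolic_stage(finding, location)

print(diagnose("scan-001"))  # high severity - refer to specialist immediately
print(diagnose("scan-003"))  # no action needed
```

The point of the split is that the neural stage can be retrained on new data without touching the rules, while the rules can be audited and extended by domain experts without retraining the network.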
Intel's Cognitive Computing Research
While the above example deals heavily with images, Intel's Cognitive Computing Research broadens the possibilities of Neuro-Symbolic AI to language processing, recommendation systems, and other applications. And it could be that many of the best uses for the technology remain yet to be discovered.
Current Intel AI Technologies
Accomplishing more with AI starts with building ever more powerful processors to handle more data and large-capacity, high-speed memory to store it.
End-To-End AI Acceleration
Intel Xeon Scalable processors are a prime example of how Intel is stepping up to take on AI's most demanding workloads. Thanks to Intel Advanced Vector Extensions 512 (Intel AVX-512), their latest processors can securely crunch more numbers faster than ever. Add to that Intel Deep Learning Boost, which builds on Intel AVX-512 to speed up deep learning networks, and you have AI systems capable of learning and doing more.
For proof of Intel's contribution to the field of AI, one needn't look any further than the performance of Intel 3rd Gen Xeon Scalable processors on popular AI benchmarks:
- Up to 1.74x higher batch inference throughput
- Up to 1.59x higher INT8 real-time inference throughput
- Up to 4.5x more images per second at INT8 and up to 6x more images per second at BF16 object detection
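To see why INT8 inference can be so much faster, it helps to look at what quantization does: weights and activations stored as 32-bit floats are mapped onto 8-bit integers, so each vector instruction can process several times as many values. The sketch below is a simplified illustration of symmetric linear quantization, not Intel's actual implementation; real frameworks calibrate scales per tensor or per channel.

```python
# Simplified symmetric INT8 quantization (illustrative only).

def quantize_int8(values):
    """Map floats onto the integer range [-127, 127] with one shared scale."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Approximately recover the original floats."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
print(q)  # [50, -127, 2, 100]

restored = dequantize(q, scale)
# The values come back only approximately; that small rounding error is
# the price paid for 4x smaller storage and wider SIMD throughput.
```

BF16 takes a different trade-off: it keeps float32's exponent range but truncates the mantissa, halving storage while avoiding the calibration step that INT8 requires.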
For the Latest in AI Hardware Expertise, Look to UNICOM Engineering
As an Intel Technology Provider, UNICOM Engineering has driven solutions like HPC and AI with our partners for decades. Our skilled team actively designs solutions based on the latest 3rd Gen Intel Xeon Scalable processors and related technologies. By leveraging our services, our customers benefit from solutions optimized for telecom, cloud, enterprise, network, security, IoT, and AI workloads, with expanded I/O, storage, and network connectivity options. Learn more about how UNICOM Engineering can help you transition to next-gen solutions by scheduling a consultation.