19 November 2024

Top AI chip companies 2023

Alphabet AI chip

Google’s parent company is a leader in developing artificial intelligence technology for a wide range of settings, including cloud computing, data centers, mobile devices, and desktop computers.

One prominent product is the Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) designed specifically for Google’s TensorFlow framework, which is used primarily for machine learning and deep learning, both integral parts of AI.
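
For readers who want a sense of how software targets the TPU, the hedged sketch below shows the usual TensorFlow 2.x pattern: resolve the TPU, initialize it, and build a Keras model inside a TPUStrategy scope so that it is compiled for the accelerator. The TPU name (“my-tpu”) and the toy model are illustrative placeholders, not Google-specific code.

```python
# Minimal sketch: training a Keras model on a Cloud TPU with TensorFlow 2.x.
# "my-tpu" is a placeholder; on a Cloud TPU VM the resolver can often be
# created with no arguments.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across the TPU's cores.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Any Keras model defined in this scope is compiled for the TPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(...) would then run the training steps on the TPU.
```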

The Edge TPU, on the other hand, caters to “edge” devices found at the network’s periphery, such as smartphones, tablets, and other equipment used outside of data centers. Unlike the Cloud TPU, a credit card-sized board intended for data centers and the cloud, the Edge TPU is smaller than a one-cent coin, making it compact and suitable for edge computing applications.
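
By way of illustration, here is a hedged sketch of how an Edge TPU is typically driven from Python using the TensorFlow Lite runtime and the Edge TPU delegate. The model file name is a placeholder for a model compiled with the Edge TPU compiler, and the delegate library name is the usual one on Linux.

```python
# Minimal sketch: running a quantized TensorFlow Lite model on an Edge TPU.
# "model_edgetpu.tflite" is a placeholder; "libedgetpu.so.1" is the Edge TPU
# delegate library on Linux systems.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy uint8 image with the shape the model expects.
dummy_input = np.zeros(input_details[0]["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class index:", int(np.argmax(scores)))
```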

Apple AI chip

Apple has been developing its own semiconductors for some time now, and may eventually reduce its reliance on vendors such as Intel, which would represent a significant shift in focus for the company. Apple has already shown a clear intention to chart its own course in artificial intelligence, as evidenced by its separation from Qualcomm following a lengthy legal dispute.

Apple’s latest iPhones and iPads are powered by the A11 and A12 “Bionic” chips, which are claimed to deliver improved performance and power efficiency. The A12 Bionic, in particular, is said to consume 50% less power than its predecessor while being 15% faster. It incorporates Apple’s Neural Engine, which was off-limits to third-party apps on the A11 but was opened up to outside developers through the Core ML framework with the A12.

Arm AI chip

Arm, or ARM Holdings, provides semiconductor designs that are utilized by all major technology companies, including Apple. Its advantage lies in being a semiconductor designer rather than a chip manufacturer, similar to how Microsoft benefited from not manufacturing its own computers. Consequently, Arm wields significant influence in the market.

The company is currently focused on AI chip designs in areas that include Project Trillium, a family of scalable, highly efficient processors built specifically for machine learning, and Arm NN, a software layer that lets machine learning frameworks such as TensorFlow and Caffe run deep learning workloads efficiently on Arm-based hardware. These initiatives exemplify Arm’s commitment to advancing machine learning technologies.
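
To show what “working with frameworks like TensorFlow” can look like in practice, here is a hedged sketch that loads a TensorFlow Lite model through Arm NN’s TensorFlow Lite delegate so that inference is offloaded to Arm CPU and GPU backends. The delegate library name and option keys follow Arm’s published examples but should be treated as assumptions here, and the model file name is a placeholder.

```python
# Hedged sketch: running a TensorFlow Lite model through the Arm NN delegate,
# which offloads supported operators to Arm CPU ("CpuAcc") and GPU ("GpuAcc")
# backends. Library path and option names are assumptions based on Arm's docs.
import tflite_runtime.interpreter as tflite

armnn_delegate = tflite.load_delegate(
    library="libarmnnDelegate.so",
    options={"backends": "CpuAcc,GpuAcc,CpuRef", "logging-severity": "info"},
)

interpreter = tflite.Interpreter(
    model_path="model.tflite",               # placeholder model
    experimental_delegates=[armnn_delegate],
)
interpreter.allocate_tensors()
# Input tensors would be set here as usual before calling invoke().
interpreter.invoke()                          # inference runs on Arm NN backends
```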

Intel AI chip

In 2017, reports emerged claiming that the world’s largest chipmaker at the time was generating $1 billion in revenue from the sale of AI chips. While Intel may no longer hold the title of the biggest chipmaker today, it certainly did back then.

The specific processors discussed in that report were from Intel’s Xeon family, which were not originally designed explicitly for AI but were instead general-purpose processors that had undergone enhancements. In addition to refining the Xeon line, Intel has also developed a series of AI chips called “Nervana,” which are known as “neural network processors.” These advancements demonstrate Intel’s commitment to both enhancing its existing processors and introducing specialized chips for AI applications.

Nvidia AI chip

Nvidia is currently leading the market for GPUs, which, as mentioned earlier, excel in executing AI workloads faster than general-purpose computers. Furthermore, the company seems to have gained an advantage in the emerging AI processor market.

The two technologies are closely linked: Nvidia’s advances in GPU hardware have driven the development of its AI processors. Indeed, Nvidia’s AI products are built on GPUs, and its chipsets are best thought of as AI accelerators.

Nvidia offers a range of AI chip technologies, including its Tesla data center GPUs, the Volta GPU architecture, and the Xavier system-on-chip. These GPU-based products are packaged with software to cater to specific needs and requirements in the AI domain.
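
To make the GPU-acceleration point concrete, the minimal PyTorch sketch below moves a small model and a batch of data onto an Nvidia GPU when one is available and falls back to the CPU otherwise; the layer sizes and batch are arbitrary illustrations, not an Nvidia-specific recipe.

```python
# Minimal sketch: offloading a neural network workload to an Nvidia GPU
# with PyTorch; falls back to the CPU if no CUDA device is present.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).to(device)

batch = torch.randn(64, 1024, device=device)   # dummy input batch
with torch.no_grad():
    logits = model(batch)                       # forward pass runs on the GPU

print("Ran on:", logits.device)
```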

Advanced Micro Devices AI chip

Like Nvidia, AMD is a chipmaker with strong ties to graphics cards and GPUs, an association that stems from the growth of computer gaming over the past few decades and the more recent surge in cryptocurrency mining.

In machine learning and deep learning, AMD offers hardware and software solutions that include EPYC CPUs, Radeon Instinct GPUs, and its ROCm open software platform. Radeon is a line of graphics processors aimed primarily at gamers, while EPYC is AMD’s processor for servers, particularly in data centers. AMD also produces other notable chips, such as Ryzen and the widely recognized Athlon.

Baidu AI chip

Baidu, often referred to as the “Google of China,” has expanded beyond internet search into new industries such as driverless cars, sectors that demand powerful microprocessors and AI chips. Accordingly, Baidu last year introduced Kunlun, described as a “cloud-to-edge AI chip,” designed to power AI applications all the way from cloud-based systems to edge devices.

Graphcore AI chip

Graphcore is the first company on this list dedicated primarily to developing and delivering AI chips, which sets it apart from the seven established organizations above, for which AI chips are just one line of business.

Currently, Graphcore’s flagship offering is the Rackscale IPU-Pod, leveraging their Colossus processor designed specifically for data centers. However, with substantial investments pouring in, the company’s future holds promise for further advancements and expansions. Renowned brands like BMW and Microsoft have collectively invested $300 million in Graphcore, elevating its valuation to over $2 billion.

In this context, the term “IPU” refers to an intelligent processing unit, highlighting Graphcore’s dedication to creating cutting-edge AI processing technology.

Qualcomm AI chip

Qualcomm has felt abandoned by Apple’s decision to stop buying its modem chips, given how significant a source of revenue Apple was during the smartphone boom. However, Qualcomm remains a prominent player in the industry and has made significant, forward-looking investments.

Although Qualcomm entered the AI chip market relatively late, industry analysts recognize the company’s extensive expertise in the mobile market. Leveraging this experience, Qualcomm aims to achieve its stated objective of making on-device AI accessible and pervasive.

Adapteva AI chip

Adapteva stands out as one of the intriguing companies on this list, mainly due to its Parallella offering, which is often hailed as the most affordable supercomputing solution available. At the core of Adapteva’s product line is the Epiphany, a 64-bit microprocessor with an impressive 1024 cores, touted as a groundbreaking achievement in the industry.

Adapteva has achieved significant milestones in its journey, securing over $10 million in funding, including support from Darpa, and successfully launching a Kickstarter campaign for its Parallella product. These achievements highlight the company’s ability to attract support and generate interest in its innovative AI chip solutions.

Mythic AI chip

https://mythic.ai/

With over $40 million in funding, Mythic is poised to bring its “AI Without Borders” vision to fruition, starting with data centers. The company’s distinctive approach performs mixed digital and analog computation inside flash memory arrays: neural network weights are stored in the flash cells themselves, and the multiply-accumulate operations at the heart of deep learning are carried out in the analog domain. Mythic touts this as a revolutionary methodology that frees local AI from the power and memory-bandwidth limits that normally constrain deep neural network processing.
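
To give a rough sense of what compute-in-memory means, the toy NumPy sketch below mimics the idea: weights are “programmed” into an array as conductances, inputs are applied as voltages, and the matrix multiply emerges as summed column currents, so the data never has to shuttle between a separate memory and processor. This is a conceptual illustration only, not Mythic’s actual design.

```python
# Toy illustration of analog compute-in-memory (not Mythic's real hardware).
# Weights live in the memory array as conductances G; applying an input
# voltage vector V to the rows produces column currents I = V @ G, i.e. the
# matrix-vector multiply happens inside the array rather than in a separate ALU.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(256, 64))     # trained layer weights
conductances = weights                   # imagine these programmed into flash cells

voltages = rng.normal(size=(1, 256))     # input activations applied as voltages
currents = voltages @ conductances       # summed currents on each output column

# In a real device the analog currents would be digitized (ADC) before the
# result is passed to the next layer.
print(currents.shape)                    # (1, 64)
```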

Thanks to its compact size and the speed of its analog matrix processor, Mythic’s solution offers the power of “massive parallel processing” while maintaining a lightweight profile. This combination of portability and computational capability positions Mythic as a promising player in the field of AI.

Samsung AI chip

Samsung, having overtaken Apple as the top smartphone producer and Intel as the world’s largest chipmaker, is venturing into new and unexplored territory.

The latest version of Samsung’s Exynos processor, designed for LTE (long-term evolution) communication networks, was launched late last year. Notably, Samsung has enhanced the on-device neural processing unit in the new Exynos, signaling its commitment to advancing AI capabilities.

Taiwan Semiconductor AI chip

While TSMC has been a significant semiconductor supplier for Apple, the company maintains a relatively low profile. Despite having a website and sharing updates with investors, it doesn’t disclose much about its actual operations.

Fortunately, news outlets such as DigiTimes keep a close eye on the activities of chipmakers. It was recently reported that Alibaba, the e-commerce giant, has enlisted the services of TSMC and Global Unichip to develop an AI chip. This collaboration highlights the growing demand for advanced AI technology in various industries.

HiSilicon AI chip

This division belongs to Huawei, the telecommunications equipment manufacturer currently facing trade restrictions. Because of these restrictions, Huawei is no longer able to operate in the US, and some European countries are following suit.

However, it appears that HiSilicon’s AI chip technology is still in its early stages. To overcome the growing supply restrictions imposed on Huawei, the company will need to intensify its efforts and seek alternative strategies. Adapting to these challenges will be crucial for HiSilicon’s future success in the AI chip market.

IBM AI chip

No list of this nature would be complete without mentioning IBM. As expected, IBM invests heavily in research and development across a wide range of AI-related technologies. While IBM’s well-known Watson AI relies on conventional processors rather than AI-specific chips, it still delivers impressive performance.

IBM’s TrueNorth chip falls into the category of specialized AI chips. With 5.4 billion transistors, TrueNorth is a “neuromorphic” chip designed to mimic the workings of the human brain. Although that may sound substantial, it pales in comparison to AMD’s EPYC chip, which boasts 19.2 billion transistors.

Xilinx AI chip

Xilinx takes the lead in transistor count with its microprocessors: its Versal chipsets, previously codenamed Everest, boast an impressive 50 billion transistors. Xilinx specifically positions Versal as an AI inference platform, meaning it is built to apply already-trained machine learning and deep learning models to new data rather than to train them.

While the complete Versal and Everest solutions incorporate chips from other manufacturers, Xilinx is among the pioneers in delivering standalone packages with such remarkable computing power.

Via AI chip

https://www.viatech.com/en/

While Via doesn’t offer a dedicated AI chip of its own, it does sell an “Edge AI Developer Kit” built around a Qualcomm processor and various other components, which makes it worth highlighting as a different type of company on this list.

It’s only a matter of time before AI is integrated into the offerings of other makers of affordable, compact computers, such as Arduino and Raspberry Pi. In fact, some of them already include an AI chip; according to Geek, Pine64 is one such example.

LG AI chip

LG, a prominent consumer electronics manufacturer, has shown its agility by venturing into robotics and anticipating the rise of smart homes and intelligent machinery.

In a recent announcement, LG introduced its proprietary AI chip called the LG Neural Engine. The company aims to leverage this chip to fast-track the development of AI-enabled devices for homes. It is worth noting that LG may also utilize these chips in its data centers and backend systems before deploying them to edge devices.

Imagination Technologies AI chip

Virtual and augmented reality applications demand significant computational power, surpassing most other applications in their requirements. Indeed, there were reports of Google’s data center servers slowing down during the global craze for the augmented reality game Pokémon Go.

To meet the demands of VR and AR, it is crucial to integrate AI chips both in the data center and edge devices. Imagination, with its PowerVR GPU, makes strides in addressing this need as well.

SambaNova AI chip

https://sambanova.ai/

Backed by over $200 million in funding, this company possesses the resources to design custom AI chips for its clients. SambaNova, despite being in its early stages, asserts that it is developing integrated hardware and software solutions to propel the future of AI computing.

Notably, Alphabet (Google) is among the key investors in this venture. It is common to observe prominent and established corporations investing in innovative startups as a preventive measure against potential disruptions.

Groq AI chip

Founded by former Google employees, including people who worked on the company’s Tensor Processing Unit (TPU), this low-profile startup has made significant strides. Last year it secured $60 million in funding to pursue its vision, based on the belief, in the startup’s own words, that the next major leap in computation will come from a new and efficient architectural approach to both hardware and software.

Kalray AI chip

https://www.kalrayinc.com/

Kalray, a well-funded European company at the forefront of semiconductor development for AI processing, has featured previously on Robotics and Automation News, where we interviewed one of its senior executives. Kalray’s architecture allows multiple neural network layers to be computed concurrently while maintaining low power consumption, making it suitable for both data centers and edge devices.

Amazon AI chip

Considering Amazon’s significant contribution to the cloud computing market through Amazon Web Services, it is a logical step for the company to venture into the AI chip market, particularly to enhance the efficiency of its data centers.

Late last year, the world’s largest online retailer unveiled its AWS Inferentia AI processor. The chip is unlikely to be sold commercially to outside companies as a standalone product; instead, it is deployed within Amazon’s own AWS infrastructure.

Cerebras Systems AI chip

https://www.cerebras.net/

Cerebras Systems, founded in 2015, made headlines in April 2021 with the unveiling of its wafer-scale Cerebras WSE-2 AI chip. The chip boasts 850,000 cores and 2.6 trillion transistors, surpassing its predecessor, the WSE-1, which had 400,000 processing cores and 1.2 trillion transistors.

The effectiveness of Cerebras’ WSE-1 technology, which has accelerated genetic and genomic research, has led to its adoption by various pharmaceutical companies, including AstraZeneca and GlaxoSmithKline.

Hailo AI chip

Hailo, an AI chipmaker based in Israel, has developed a specialized AI processor that gives edge devices the computational power of data center-class computers.

The processor rethinks traditional computer architecture, enabling smart devices to perform complex deep learning tasks such as object detection and segmentation in real time, while minimizing power consumption, space requirements, and cost. It is designed to integrate seamlessly into a wide range of intelligent machines and devices, with applications across automotive, industry 4.0, smart cities, smart homes, and retail.

To support these applications, Hailo offers high-performance AI acceleration modules in M.2 and Mini PCIe form factors, built around its Hailo-8 processor.

Anari AI chip

https://anari.ai/

Anari AI is revolutionizing the AI hardware sector with a novel approach to chip design and utilization. With a focus on reconfigurable AI, Anari AI lets users quickly build and deploy customized solutions, allowing them to optimize their infrastructure with ease.

At the forefront of this platform is ThorX, Anari’s inaugural processor, which the company says delivers a 100x increase in computing efficiency compared with GPUs when working with 3D and graph data structures. This breakthrough technology opens up new possibilities for efficient and effective AI processing.