Today, d-Matrix, a leader in high-efficiency AI-compute and inference processors, announced Jayhawk, an Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) based chiplet platform for energy-efficient die-to-die connectivity over organic substrates. Building on the Nighthawk chiplet platform launched in 2021, the second-generation Jayhawk silicon platform further builds out the scale-out, chiplet-based inference compute platform. d-Matrix customers will be able to use these inference compute platforms to address generative AI applications and large language model transformer applications with a 10-20X improvement in performance.

Large transformer models are creating new demands for AI inference at the same time that memory and energy requirements are hitting physical limits. d-Matrix offers one of the first Digital In-Memory Compute (DIMC) based inference compute platforms to come to market, transforming the economics of complex transformers and generative AI with a scalable platform built to handle the immense data and power requirements of inference AI. Improving performance can make energy-hungry data centers more efficient while reducing latency for end users in AI applications.

“With the announcement of our 2nd generation chiplet platform, Jayhawk, and a track record of execution, we are establishing our leadership in the chiplet ecosystem,” said Sid Sheth, CEO of d-Matrix. “The d-Matrix team has made great progress towards building the world’s first in-memory computing platform with a chiplet-based architecture targeted for the power hungry and latency sensitive demands of generative AI.”

d-Matrix’s novel compute platform uses an ingenious combination of an in-memory compute-based IC architecture, sophisticated tools that integrate with leading ANN models, and chiplets in a block grid formation to support scalability and efficiency for demanding ML workloads. By using a modular chiplet-based approach, data center customers can refresh compute platforms on a much faster cadence using a pre-validated chiplet architecture. To enable this, d-Matrix plans to build chiplets based on both BoW and UCIe interconnects, enabling a truly heterogeneous computing platform that can accommodate third-party chiplets.

“d-Matrix has moved quickly to seize the chiplet opportunity, which should give them a first-mover advantage,” said Karl Freund, Founder and Principal Analyst at Cambrian-AI Research. “Anyone looking to add an AI accelerator to their SoC design would do well to investigate this new approach for efficient AI.”

The Jayhawk chiplet platform features:

  • 3 mm, 15 mm, 25 mm trace lengths on organic substrate
  • 16 Gbps/wire high-bandwidth throughput
  • 6 nm TSMC process technology
  • <0.5 pJ/bit energy efficiency
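Taken together, the spec numbers above imply a very low per-wire link power. A back-of-envelope sketch (the 64-wire link width is a hypothetical illustration, not a d-Matrix figure, and the 0.5 pJ/bit value is the spec's upper bound):

```python
# Back-of-envelope Jayhawk link power from the published spec numbers.
# Assumptions: 16 Gbps per wire, 0.5 pJ/bit (spec says < 0.5, so this is
# an upper bound); the 64-wire link width is hypothetical, for illustration.

GBPS_PER_WIRE = 16    # per-wire throughput (Gbps)
PJ_PER_BIT = 0.5      # energy per bit (pJ), upper bound

bits_per_second = GBPS_PER_WIRE * 1e9
joules_per_bit = PJ_PER_BIT * 1e-12

watts_per_wire = bits_per_second * joules_per_bit
print(f"{watts_per_wire * 1e3:.1f} mW per wire")  # 8.0 mW

# Scaling to a hypothetical 64-wire BoW link running at full rate:
wires = 64
aggregate_gbps = GBPS_PER_WIRE * wires
total_mw = watts_per_wire * wires * 1e3
print(f"{aggregate_gbps} Gbps aggregate, {total_mw:.0f} mW total")
```

At under 0.5 pJ/bit, even a wide link stays in the sub-watt range, which is the point of moving die-to-die traffic onto short organic-substrate traces.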

Jayhawk is currently available for demos and evaluation. d-Matrix will be showcasing the Jayhawk platform at the Chiplet Summit, Jan 24-26 in San Jose, CA.



