
Tachyum Delivers First Software Emulation Systems

AI TechPark

Tachyum™ delivered its first Prodigy Software Emulation Systems to early adopter customers and partners, who can now use them for evaluation, development and debugging. Delivery of these systems is a key milestone in achieving production-ready status for the world’s first Universal Processor for data center, AI and HPC workloads.

Tachyum’s Prodigy can run HPC applications, convolutional AI, explainable AI, general AI, bio AI, and spiking neural networks, plus normal data center workloads, on a single homogeneous processor platform, using existing standard programming models. Evaluation customers and partners can test their recompiled and native applications and begin porting them to the Prodigy environment.

Pre-built systems include:

  • Prodigy emulator
  • Native Tachyum Linux 5.10
  • Toolchains: GNU GCC 10.2 in both cross and native versions
  • Debugging capabilities: both native GDB and cross-GDB
  • User-mode applications:
    • Web server: Apache
    • SQL server: MariaDB, SQLite
    • NoSQL server: MongoDB
    • Scripting languages: PHP, Python, Perl, Ruby, Tcl; non-JIT version of the Java Virtual Machine
    • Developer tools: Git, Subversion, Sed, Gawk, Grep
  • x86, ARMv8 and RISC-V emulators
  • Scientific libraries: Eigen, vectorized and tensorized BLAS including GEMM, vectorized and tensorized LAPACK, FFT library, ODE/PDE solvers
  • AI software: PyTorch 1.7.1, TensorFlow 2.0
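
For evaluation teams, a quick way to confirm that the pre-installed AI stack works end to end is a tiny forward/backward pass through PyTorch. The sketch below is only an illustration of such a smoke test: it uses standard PyTorch APIs, and how PyTorch behaves and performs under the Prodigy emulator is assumed here rather than taken from Tachyum’s materials.

    import torch
    import torch.nn as nn

    # Tiny feed-forward model; small enough to run quickly under emulation.
    model = nn.Sequential(
        nn.Linear(16, 32),
        nn.ReLU(),
        nn.Linear(32, 4),
    )

    x = torch.randn(8, 16)   # batch of 8 random inputs
    y = model(x)             # forward pass exercises the BLAS/GEMM paths
    loss = y.sum()
    loss.backward()          # backward pass exercises autograd

    print("PyTorch version:", torch.__version__)
    print("forward/backward OK, output shape:", tuple(y.shape))

A run like this touches the Python runtime, the PyTorch build and the underlying vectorized BLAS routines listed above in a single pass, which is typically the first thing an evaluator wants to verify before porting larger workloads.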

“We are excited to deliver the first of our software emulation systems to early adopters eager to seamlessly turn their data centers into universal computing centers that deliver industry-leading performance at a significantly lower cost of ownership,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “By deploying these emulation systems, we are one step closer to fulfilling our mandate to bring Prodigy to the market and revolutionize performance, energy consumption, server utilization and space requirements of next-generation cloud environments.”

Without Prodigy, public and private cloud data centers must rely on a heterogeneous hardware fabric of CPUs, GPUs and TPUs to address these different workloads, which creates inefficiency and expense and complicates supply and maintenance. Dedicating specific hardware to each type of workload (e.g. data center, AI, HPC) results in underutilized hardware, more difficult programming, and heavier software integration and hardware maintenance burdens. Prodigy’s ability to seamlessly switch among these workloads dramatically changes the competitive landscape and the economics of data centers.

In public and private cloud data centers, Prodigy significantly improves computational performance, energy consumption, hardware (server) utilization and space requirements compared to currently provisioned processor chips. As the world’s first universal processor, it also runs legacy x86, ARM and RISC-V binaries in addition to its native Prodigy code. With a single, highly efficient processor architecture, Prodigy delivers industry-leading performance across data center, AI and HPC workloads, outperforming the fastest Xeon processors at 10x lower power (core vs. core) and outperforming NVIDIA’s fastest GPU in HPC as well as AI training and inference. A mere 125 HPC Prodigy racks can deliver 32 tensor EXAFLOPS.

Prodigy’s 3X lower cost per MIPS and its 10X lower core power translate to a 4X lower data center Total Cost of Ownership (TCO), delivering billions of dollars in annual savings to hyperscalers. Since Prodigy is the world’s only processor that can switch between data center, AI and HPC workloads, otherwise idle servers can be repurposed as CAPEX-free AI or HPC cloud resources, because the hardware has already been amortized. Prodigy will also allow edge and IoT developers to exploit its low power, high performance and simple programming model to deliver efficient, high-performance AI at the edge.
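
To make the arithmetic behind those multipliers concrete, the short calculation below shows one way a 3X cost-per-MIPS and a 10X core-power advantage can compose into roughly a 4X lower TCO. The baseline cost breakdown (hardware vs. power and cooling vs. everything else) is an assumption chosen for this sketch, not a Tachyum figure.

    # Illustrative TCO roll-up under an assumed baseline cost breakdown.
    HW_SHARE, POWER_SHARE, OTHER_SHARE = 0.45, 0.50, 0.05  # assumed shares of baseline TCO

    def relative_tco(cost_per_mips_factor: float, core_power_factor: float) -> float:
        """New TCO relative to a baseline of 1.0, given the two reduction factors."""
        return (HW_SHARE / cost_per_mips_factor
                + POWER_SHARE / core_power_factor
                + OTHER_SHARE)

    new_tco = relative_tco(cost_per_mips_factor=3.0, core_power_factor=10.0)
    print(f"Relative TCO: {new_tco:.2f} (about {1.0 / new_tco:.1f}x lower)")

With a different cost split the result shifts accordingly; the point is only that the combined TCO reduction depends on how the per-component factors weight against each other, not on either factor alone.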
