THE HYPERSCALE DATA DISTRIBUTION SOFTWARE PLATFORM

A software engine
for moving data at scale and speed

About Zettar

Zettar Inc. delivers a GA-grade, scale-out, petascale-proven, all-in-one hyperscale data distribution software solution capable of multi-100 Gbps throughput. The world faces exponential data growth. As a result, modern IT needs a fourth pillar alongside the familiar three of storage, computing, and networking: data movement at scale and speed. Zettar aims to be the foundation of this fourth pillar.

Unlike other commercial data transfer software, which belongs to the legacy, decade-plus-old Managed File Transfer (MFT) category, Zettar zx has a ground-breaking design built to tackle this exponential data growth comfortably. It establishes a brand-new category for modern digital businesses: hyperscale data distribution. In March 2019, Zettar won the Supercomputing Asia 2019 (SCA19) Data Mover Challenge (DMC), a grueling two-month international competition at the highest level, beating six other elite national teams from the U.S. and Japan by a wide margin.

Zettar is a fully revenue-supported startup in Palo Alto, California. It has also been awarded grants by both the U.S. National Science Foundation (NSF) and the Department of Energy (DOE) Office of Science.

Zettar zx is the world's only data transfer software whose performance is unaffected by distance, encryption, or checksumming. This combination is a Holy Grail that numerous parties, academic and commercial alike, have pursued for decades without success. Its value to the business efficiency of any distributed, data-intensive enterprise should be evident.

Zettar has deployed its solution at a large global biopharmaceutical company, a European Oil & Gas major, a project within the U.S. DOE Exascale Computing Project (ECP), a hyperscale Web property, a well-known research university, and more. Many of these use cases involve moving multiple petabytes of data over distance.

May 2017: Zettar Moves Petabyte Datasets at Unprecedented Speed via ESnet

Each end employs only a modest two-node cluster of inexpensive 1U commodity servers, each with 4 × 10 Gbps unbonded Ethernet ports (thus 2 × 4 × 10 Gbps = 80 Gbps, the bandwidth cap).
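As a quick sanity check of the bandwidth cap, the arithmetic is simply nodes × ports × per-port rate. A minimal Python sketch (variable names are illustrative, not from Zettar's tooling):

    # Aggregate bandwidth cap per end: nodes x ports x per-port rate.
    nodes_per_end = 2        # two 1U commodity servers at each end
    ports_per_node = 4       # four unbonded Ethernet ports per server
    gbps_per_port = 10       # 10 Gbps per port
    cap_gbps = nodes_per_end * ports_per_node * gbps_per_port
    print(cap_gbps)          # -> 80 (Gbps)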

September 2018: ESnet's Network, Software Help SLAC Researchers in Record-Setting Transfer of 1 Petabyte of Data

With the same data transfer nodes as in the 2017 trial, and the four AIC SB122A-PH 1U 10-bay storage servers updated with Intel Xeon E5-2699 v4 CPUs and 16 Intel Optane P4800X 375 GB SSDs, Zettar transferred one petabyte of data, with encryption and checksumming, in just 29 hours over the same connection. The average transfer rate was 75 Gbps, or 94% utilization of the available bandwidth.
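As a back-of-the-envelope check on these figures (assuming a decimal petabyte, i.e. 10^15 bytes; the arithmetic below is an illustration, not Zettar's published methodology):

    # Back-of-the-envelope check of the September 2018 trial figures.
    # A decimal petabyte over exactly 29 hours implies ~76.6 Gbps; the
    # reported 75 Gbps average reflects rounding in the quoted duration.
    bits_moved = 1e15 * 8                # 1 PB = 10^15 bytes = 8e15 bits
    duration_s = 29 * 3600               # 29 hours in seconds
    implied_gbps = bits_moved / duration_s / 1e9
    utilization = 75 / 80                # stated average vs. the 80 Gbps cap
    print(f"{implied_gbps:.1f} Gbps")    # -> 76.6 Gbps
    print(f"{utilization:.1%}")          # -> 93.8%, i.e. ~94% utilization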

The May 2017 trial results visualized

Zettar zx generated more than one-third of ESnet's traffic

The September 2018 trial results visualized

Zettar zx again generated more than one-third of ESnet's traffic

News

  1. Chin Fang, Is A Science DMZ The Key To Solving Poor Data Utilization?, Bio-IT World, contributed commentary, April 16, 2019
  2. ESnet News & Publications, ESnet's Networking Prowess on Display at Singapore Conference, April 8, 2019
  3. Supercomputing Asia 2019, Data Mover Challenge Winners Announced!, National Supercomputing Centre, Singapore, March 13, 2019
  4. Thomas Coughlin, Wicked Fast Data Transport, Forbes, February 11, 2019
  5. Press release, BeeGFS based burst buffer enables world record hyperscale data distribution, October 25, 2018
  6. Machine Design, DoE Tests Newest Information Superhighway, October 23, 2018
  7. ESnet News & Publications, ESnet's Network, Software Help SLAC Researchers in Record-Setting Transfer of 1 Petabyte of Data, October 17, 2018
  8. InsideHPC, Big Data over Big Distance: Zettar Moves a Petabyte over 5000 Miles in 29 Hours, October 5, 2018
  9. Press release, Zettar Transferred, with Encryption, One Petabyte of Data in Just 29 Hours Using AIC Servers, October 4, 2018
  10. Chin Fang, Dealing With Fast Growing Data With Hyperscale Data Distribution, Bio-IT World, contributed commentary, August 17, 2018

Contact Us