News

Susanne Silberhorn, August 1, 2018

  • August 1, 2018  –  Exclusive Platinum Partnership with Advanced HPC
    Advanced HPC, a leading HPC specialist and solutions provider, announced today that it has become the first and only U.S. Platinum Partner of BeeGFS, the globally renowned parallel cluster file system. By achieving Platinum status for BeeGFS, Advanced HPC is able to offer the parallel file system with highly competitive pricing and best-in-class support. Additionally, Advanced HPC’s extensive history, expertise, and training with BeeGFS enable the company to deliver wholly unique, customized solutions.

  • June 6, 2018 – Bright and BeeGFS Share Plans for TERATEC 2018 Forum
    ThinkParQ, the company behind the leading parallel file system BeeGFS, will co-exhibit with Bright Computing, a global leader in cluster and cloud infrastructure automation software, at the TERATEC Forum (booth #6), taking place June 19 and 20 at Ecole Polytechnique in France. The partnership between these two companies enables organizations to leverage the full performance of their available hardware while ensuring all components within the HPC environment are easy to manage.

  • May 30, 2018  – BeeGFS v7 generally available
    With the immediate availability of the BeeGFS version 7 major release, we’re introducing storage pools as an innovative approach to leveraging different types of storage targets. This enables customers to get highest-performance access when needed while also having access to high-capacity storage within the same namespace. BeeGFS version 7 also includes modification event logging as well as performance and load monitoring capabilities.

  • March 27, 2018  – ThinkParQ teams up with Nyriad for GPU-accelerated Storage:
    At NVIDIA GTC in San Jose, Nyriad and ThinkParQ announced a partnership to develop a certification program for high performance, resilient storage systems that combine the BeeGFS parallel file system with NSULATE, Nyriad’s solution for GPU-accelerated storage-processing.

  • March 8, 2018  – TUK in Germany installs NEC LX Supercomputer with Intel Omni-Path:
    NEC Deutschland GmbH has delivered an LX series supercomputer to Technische Universität Kaiserslautern (TUK), one of Germany’s leading Universities of Technology. The storage solution is based on the widely deployed BeeGFS parallel file system made in Germany.

  • December 8, 2017 – Quanta Cloud Technology presents QxSmart High-Performance Computing/Deep Learning:
    QCT is cooperating with ThinkParQ to create scalable solutions for HPC clusters, high-performance applications, and data-intensive analytics.

  • December 7, 2017 – System Fabric Works and ThinkParQ Partner for Parallel File System:
    Today System Fabric Works (SFW) announces its support and integration of the BeeGFS file system with the latest NetApp E-Series all-flash and HDD storage systems. This makes BeeGFS available on the family of NetApp E-Series Hyperscale Storage products as part of SFW’s Converged Infrastructure solutions for high-performance enterprise computing, data analytics, and machine learning.

  • November 13, 2017 – BeeGFS v7.0 Release Candidate with Storage Pools: 
    The new BeeGFS version 7 introduces an innovative new concept to take advantage of SSDs: System administrators can create pools of different types of storage targets, and users can assign certain data (e.g. the active project directory) to the flash pool, move it to a different pool when they are done, or keep it on the flash pool permanently – all in the same namespace. This enables BeeGFS to deliver all-flash performance for the current project while taking advantage of the cost-effective capacity of HDDs, unlocking a level of performance that cannot be achieved with transparent SSD caching solutions.
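    The pool workflow described above can be sketched with the beegfs-ctl command-line tool. This is a rough illustration only: the target IDs, the pool ID, and the mount path are assumptions, and the exact flags may differ between BeeGFS versions.

    ```shell
    # Create a pool from the flash storage targets (target IDs are examples)
    beegfs-ctl --addstoragepool --desc="flash" --targets=1,2,3,4

    # Show the configured pools and their IDs
    beegfs-ctl --liststoragepools

    # Assign the active project directory to the flash pool (pool ID 2 assumed);
    # files created under this directory are then placed on the flash targets
    beegfs-ctl --setpattern --storagepoolid=2 /mnt/beegfs/projects/active
    ```

    Because all pools live in the same namespace, data can later be moved to an HDD pool without changing any paths visible to users.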

  • November 13, 2017 – QCT Joins Hands with ThinkParQ to Embrace I/O Intensive HPC Clusters:
    Quanta Cloud Technology (QCT) showcases its QxSmart High Performance Computing (HPC)/Deep Learning (DL) Solution with ThinkParQ BeeGFS®, the latest member in its portfolio of software-defined solutions, at SuperComputing 2017. QCT has been a long-time HPC supplier, and QxSmart HPC/DL is the culmination of the company’s latest research and development.

  • October 10, 2017 – Japan likes BeeGFS:
    After the Japanese hybrid AI & HPC supercomputer Tsubame 3.0 (built by SGI/HPE) went into production earlier this year using BeeGFS-on-Demand (BeeOND) on the compute nodes for a 1PB NVMe burst buffer at 1TB/s, we have now learned that the new ABCI system at Japan’s National Institute of Advanced Industrial Science and Technology (set to become Japan’s largest supercomputer in 2018) will also be using BeeOND.

  • August 31, 2017 – Ace Computers Teams with BeeGFS for HPC: Ace Computers and BeeGFS have teamed up to deliver a complete parallel file system that solves the storage access speed issues slowing down even the fastest supercomputers. BeeGFS closes the gap between compute speed and the limited speed of storage access on these clusters, which would otherwise stall on disk access while reading input data or writing intermediate or final simulation results.
  • August 23, 2017 –  NetProject starts offering BeeGFS storage appliances in Russia and CIS: NetProject Ural LLC from Moscow, RUSSIA, a Russian system integrator with a strong focus on the oil & gas industry and ThinkParQ, the company behind BeeGFS, announce their partnership for high-performance storage solutions in Russia and CIS.
  • August 08, 2017 – BeeGFS Version 7 beta 1: Storage Pools: The new BeeGFS version 7 introduces an innovative new concept to take advantage of SSDs: System administrators can create pools of different types of storage targets, and users can assign certain data (e.g. the active project directory) to the flash pool, move it to a different pool when they are done, or keep it on the flash pool permanently – all in the same namespace.
  • June 23, 2017 – Impressions from the ISC 2017 Frankfurt, Germany: Three days full of fun and HPC have come to an end. Many thanks to ISC High Performance -The HPC Event and our partners for their support, interesting talks and new opportunities.
  • June 19, 2017 – BeeGFS joins EOFS: We are excited to announce that ThinkParQ with BeeGFS has joined the European Open File System (EOFS) organization!
  • June 16, 2017 – OpenHPC BeeGFS integration: x86 & ARM: We are happy to announce that BeeGFS has now been integrated in OpenHPC v1.3.1, a compilation of many useful tools that are needed in most HPC Systems, which is not only available for x86, but also for ARM.
  • June 16, 2017 – Penguin Computing FrostByte adds BeeGFS Storage: Today Penguin Computing announced FrostByte with ThinkParQ BeeGFS, the latest member of the family of software-defined storage solutions. FrostByte is Penguin Computing’s scalable storage solution for HPC clusters, high-performance enterprise applications and data intensive analytics.
  • May 22, 2017 – Bright Cluster Manager adds BeeGFS: We are very happy to announce that BeeGFS setup and monitoring with live statistics has now been integrated into the new Bright Cluster Manager 8.0!
  • April 24, 2017 – Intel Shuts Down Lustre File System Business: According to the news, Intel is getting out of the business of selling a commercially supported release of the Lustre parallel file system – BeeGFS is there for you and keeps your data stayin’ alive!
  • April 18, 2017 – TSUBAME 3.0: BeeGFS will be used to power a 1 petabyte burst buffer on the NVMe drives of the Tsubame 3.0 – one of the fastest supercomputers in the world with special focus on hybrid AI & HPC, delivered by our partner SGI/HPE.
  • April 03, 2017 – Fujitsu HPC-DA Data Analytics Appliance: Big Data meets High Performance Computing. In this slide set Fujitsu HPC presents an innovative hybrid data analytics and HPC appliance with BeeGFS.
  • March 20, 2017 – BeeGFS as the Hadoop File System: The BeeGFS Hadoop connector enables Hadoop applications to use BeeGFS instead of HDFS.
  • March 17, 2017 – Parallel File Systems for HPC Storage on Azure: High-level overview of how BeeGFS can improve the I/O performance of Azure-based HPC solutions.
  • March, 2017 – Launch of our Facebook pages: We are excited to announce the launch of our two new Facebook pages: the ThinkParQ Facebook Page and the BeeGFS Facebook Page.
  • February 24, 2017 – Advancing Modular Supercomputing with DEEP and DEEP-ER Architectures: A very interesting article about the DEEP & DEEP-ER exascale projects, in which BeeGFS is a proud participant.
  • January 30, 2017 – BeeGFS install guide now available in Japanese at chaperone.jp: Thanks to our friends at chaperone.jp for creating such a nice BeeGFS install guide.
  • Dec 22, 2016 – AWI Uses New Cray Cluster for Earth Sciences and Bioinformatics: The storage of the new Ollie cluster (delivered by Cray in cooperation with Megware) at the Alfred-Wegener-Institute in Germany is powered by BeeGFS and uses BeeOND for a burst buffer with 160GB/s throughput.
  • Dec 14, 2016 – Megware steps up as BeeGFS Platinum Partner: Megware is the first BeeGFS provider to achieve Platinum status.
  • Nov 15, 2016 – New Major Release with Metadata High-Availability: The new BeeGFS major release version 6.0 is now available for download. It comes with built-in metadata high-availability based on replication with self-healing. Full press release is available here. Changelog and download is available here.
  • November 09, 2016 – “Last night a BeeGFS saved my life” presentation: In this slide set titled “Last night a BeeGFS saved my life”, the University of Strasbourg HPC center presents their experiences with different distributed file systems.
  • November 03, 2016 – BeeGFS is OpenPOWER Ready: We have joined the OpenPOWER Foundation as an official member. High memory bandwidth, the CAPI interface for fast network access and the NVLink interface make this platform very interesting for computation and storage, including converged setups for HPC, BigData, BioInformatics, Deep Learning, Seismic Imaging, Material Sciences, Radio Astronomy and much more. BeeGFS reference benchmarks on IBM BigData P8 servers: view whitepaper. OpenPOWER European Summit press release: view press release.
  • October 28, 2016 – RAID Inc. new Gold Partner in USA: RAID Inc., a custom technical computing solutions company, is a new North American Gold Partner for BeeGFS. The team behind BeeGFS sees this strategic partnership as an opportunity for RAID Inc. to enhance its portfolio of high performance computing and Big Data infrastructure solutions across diverse markets such as genomics, drug discovery, research, semiconductors, and financial services. RAID Inc. will be exhibiting the parallel cluster file system’s storage performance prowess at the Supercomputing Conference (SC16), the HPC community’s flagship event, in Salt Lake City, Nov. 13-18, 2016, at booth 809.
  • July 22, 2016 – BeeGFS in China: Thanks to our partner JZTech, the BeeGFS website is now also available in Chinese. The Chinese BeeGFS website is available here: www.beegfs.cn
  • July 17, 2016 – BeeGFS on World’s fastest Computers: In the most recent update of the Top500 list, which contains the fastest computer clusters in the world, 6 new systems came from Germany. Out of these 6 systems, 4 are powered by BeeGFS: the NEMO cluster by Dalco at the University of Freiburg; the Cray CS400 system at the Alfred-Wegener-Institute in Bremerhaven; the BinAC cluster by Megware at the University of Tübingen; and the Minerva cluster by ClusterVision at the Albert-Einstein-Institute in Potsdam.
  • July 11, 2016 – BeeGFS on Amazon Cloud: BeeGFS is now available in the Amazon Web Services (AWS) Marketplace and can be deployed very easily on your own choice of virtual machines in the Amazon Cloud. By default, the stored data remains persistent even while your virtual machines are shut down. BeeGFS takes cloud computing to the next level, now giving you the opportunity to run even those applications in the cloud that need high-performance access to disk data.
  • July 05, 2016 – BeeGFS Docker Volume Plugin: A BeeGFS Docker volume plugin for creating persistent volumes in a BeeGFS storage cluster is available on GitHub from RedCoolBeans. The plugin is available here, and the plugin presentation from ISC16 is available here.
  • July 04, 2016 – BeeGFS with Mellanox EDR IB: Mellanox created a new solution document, demonstrating how to get optimal performance with BeeGFS over Mellanox ConnectX-4 EDR InfiniBand, resulting in full saturation of the link with just a few streams and over 9GB/s single-stream throughput.
  • June 27, 2016 – BeeGFS All-Flash Appliance: While much of the competition is still trying to figure out how to make use of SSDs for caching, our partner Scalable Informatics has announced all-flash (NVMe) appliances powered by BeeGFS, which deliver a sustained write throughput of 11.6GB/s and a sustained read throughput of 11.8GB/s per server. Thanks to BeeGFS being a parallel software-defined storage (SDS) solution, this performance can be scaled easily by adding more servers. The appliance comes at a very attractive price starting at only 1USD/GB and is very well suited for HPC, Big Data analytics and all other workloads where maximum storage access performance is required, e.g. life sciences, oil & gas, finance etc.
  • June 06, 2016 – BeeGFS Intel Omni-Path Certification: In cooperation with Intel, BeeGFS has been certified for the new Intel Omni-Path Architecture (OPA) network technology, delivering up to 12GB/s per server at low CPU utilization. Full press release is available here.
  • February 23, 2016 – BeeGFS Goes Open Source: We know it has been a long wait for some people, but the complete BeeGFS source code is now finally available for download. Full press release is available here. The source download is available here.
  • November 03, 2015 – SC Austin: Going to the Supercomputing Conference in Austin, TX? Make sure to stop by our booth no. 2022. As usual, we’re giving away beecopters and we’ll have a casual booth party on Tuesday afternoon.
  • October 01, 2015 – SEG New Orleans: We’ll be at the SEG Annual Meeting (booth no. 2045) in New Orleans for the first time. Looking forward to meeting you there.
  • August 12, 2015 – New Major Release with Storage High-Availability: The new BeeGFS major release version 2015.03-r1 is now available for download. It comes with enterprise features like built-in storage server high-availability based on replication with self-healing, support for access control lists (ACLs), and adds a number of performance and usability improvements. Full press release is available here. Changelog and download is available here.
  • July 26, 2015 – BeeGFS in the Top500 list: This month at the International Supercomputing Conference in Frankfurt, Germany, the official update of the top500.org list was published. We’re happy to see that BeeGFS is currently powering the storage of 6 of the fastest machines in the world: VSC-3 at the Vienna Scientific Cluster centre, Austria; LOEWE at the University of Frankfurt, Germany; MLS&WISO at the University of Mannheim/Heidelberg; HUMMEL at the University of Hamburg, Germany; OCuLUS at the University of Paderborn, Germany; and ABEL at the University of Oslo, Norway.
  • May 13, 2015 – New BeeGFS Whitepapers: ThinkParQ, the BeeGFS service company, has created two new whitepapers that show how to configure the hardware and software of BeeGFS storage and metadata servers and compare the effect of different setups. View: Picking the right number of targets per storage server for BeeGFS [PDF] View: Metadata Performance Evaluation of BeeGFS [PDF]
  • March 31, 2015 – First Beta with Storage High-Availability: Today we are very proud to announce availability of the first beta release for the upcoming BeeGFS version 2015.03 major release series. It provides support for one of the most demanded features for BeeGFS: Built-in storage server high-availability based on file contents replication. Check out the March newsletter for more information.
  • February 10, 2015 – Scale to Infinity and BeeOND: BeeOND (“BeeGFS On Demand”) was designed to provide a very convenient way to offload performance-critical applications from global shared storage (no matter whether the global storage is based on BeeGFS or another technology) to HDDs or SSDs in compute nodes by combining these devices into a temporary parallel file system instance. As BeeOND instances are typically created when a compute job starts on the nodes that are part of the job, performance automatically scales with the number of compute nodes on which the job runs. BeeOND also comes with a tool for efficiently staging input data from the global storage and pushing result data back to the global storage when a job is done.
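  • The BeeOND lifecycle described above can be sketched as a few commands inside a batch job. The paths and the node list file here are illustrative assumptions, and the exact beeond options may vary between versions.

    ```shell
    # Start a temporary BeeGFS instance across the job's nodes, using each
    # node's local SSD directory as backing storage, mounted at /mnt/beeond
    beeond start -n $NODEFILE -d /local/ssd/beeond -c /mnt/beeond

    # ... run the I/O-intensive application against /mnt/beeond ...

    # Copy results back to the global file system, then unmount and
    # delete the temporary instance on all nodes
    cp -r /mnt/beeond/results /mnt/global/project/
    beeond stop -n $NODEFILE -L -d
    ```

    Since the instance spans exactly the nodes of the job, aggregate throughput grows automatically with the job size.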
  • December 19, 2014 – BeeGFS Quota Update: BeeGFS update 2014.01-r10, which has been made publicly available for download today, comes with a highly demanded feature: Quota enforcement. The update also introduces a new C API to allow applications to influence stripe width, adds bash-completion support for the parameters of the command-line tool fhgfs-ctl and improves creation of parallel file system instances on-demand (fhgfs-ondemand-v2), which is very popular for creation of burst buffers on compute nodes.
  • November 03, 2014 – BeeGFS Introduction Whitepaper: ThinkParQ, the BeeGFS service company, has created a whitepaper to give a general introduction to BeeGFS. Everyone who is interested in getting an overview and learning more about the architecture and benefits of BeeGFS should definitely check it out.
  • October 30, 2014 – HPCKP’15 with BeeGFS Day: In 2015, the fourth annual HPC Knowledge Portal Meeting, organized by HPCNow! and IQTC-UB, is combined with a special BeeGFS workshop day, provided by ThinkParQ. Location: Barcelona, Spain (one of the most beautiful cities in the world). Date: February 02 – 06, 2015 (BeeGFS day: February 02). Meeting flyer: here. Meeting website: here.
  • October 20, 2014 – SC’14 approaching: In November, New Orleans will be the center of the HPC world for a whole week – SC’14 will take place from the 16th to the 21st of November. Leading experts from research, academia and industry will come together to exchange ideas and look at the latest technologies. Fraunhofer and ThinkParQ will exhibit at SC’14 this year and present the latest news and updates around the parallel file system BeeGFS. Visit our booth to talk with our experts or get a general overview of BeeGFS in the exhibitor forum presentation. We are looking forward to showing you the latest performance figures and explaining upcoming features and functionality. Fraunhofer and ThinkParQ SC’14 booth: #3147. Exhibitor forum SC’14 BeeGFS presentation: Tuesday, 18th Nov, 4pm, room 291, by Jan Heichler.
  • June 20, 2014 – OpenZFS Conference: Michael Alexander from the Vienna Scientific Cluster talks about FhGFS/BeeGFS on top of ZFS at the OpenZFS Conference, May 2014, Paris
  • April 03, 2014 – Petabyte Workshop: In close cooperation with Fraunhofer and ThinkParQ, Transtec is hosting a petabyte workshop with a strong focus on FhGFS/BeeGFS. Register to hear technical presentations, see practical demonstrations, or join the evening event together with the people behind FhGFS. Hope to see you there! Registration and details: Petabyte Workshop, May 14, Tübingen, Germany
  • March 12, 2014 – Transition to BeeGFS®: With the community of users and partners getting more and more active and involved, we felt it would be good to pick a new name for FhGFS. We will try to make the transition to BeeGFS as smooth as possible, so the change to BeeGFS is a process rather than an event. But expect more BeeGFS around here soon. P.S.: Although we were thinking of bees as nice and busy animals that work together as a swarm when we picked the name, it seems to remind many people of a famous group from the 1970s – which is also not completely wrong, as BeeGFS will do everything it can to keep your data stayin’ alive…
Monthly Newsletter
Release Announcements

Subscribe to this low volume mailing list to receive a note when new BeeGFS releases are available.