BWI Hackathon leverages superior GPU power with NVIDIA H100 and L4

by Editorial Team
The trophies at the BWI Hackathon

In this article, you will learn

  • what the German Armed Forces use artificial intelligence for,
  • how rapidly prototypes can be developed in a hackathon setting,
  • and why cloud-based GPUs are critical for AI innovation.


From idea to prototype in just five days: at the joint Data Analytics Hackathon organized by BWI and the Bundeswehr, teams developed AI-based solutions for concrete Bundeswehr use cases, supported by GPU resources from the Open Telekom Cloud. 

The hackathon: the challenge

At the end of November, BWI, the IT service provider of the German Armed Forces, hosted its seventh BWI Data Analytics Hackathon at the Aumühle site in Bonn. More than 100 participants, working both on site and remotely, formed 20 teams. Over the course of five days, they tackled four challenges defined by BWI and the Bundeswehr, each requiring varying degrees of AI deployment. The challenges were as follows:

  1. Wargaming: Speech-to-text analysis to support military decision-making
  2. OSINT: Automated situational awareness using social media, geospatial, and open data sources
  3. Space Weather: Analysis of solar storms and magnetic pulses based on satellite data
  4. Software Engineering: Development of an offline mapping solution comparable to Google Maps for use by the German Armed Forces
A presentation at the BWI Hackathon

Ideas meet a powerful cloud platform

“Code freely, no regular dates, no meetings”: The right mindset is the most important ingredient of a successful hackathon. At the same time, creativity and compelling challenges alone are not sufficient. A hackathon also depends on a powerful, stable, and flexible technical platform that allows all teams to work in a focused and uninterrupted manner.

This is precisely how the Open Telekom Cloud, together with the T-Systems onsite Innovation Lab, actively supported this year’s BWI Hackathon. From November 24 to 28, participants were able to develop and test their ideas using a dedicated GPU environment. In total, 23 GPU instances equipped with NVIDIA L4 and H100 GPUs were deployed, enabling high-performance execution of a wide range of AI and big data workflows and use cases.
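Since the Open Telekom Cloud exposes an OpenStack-compatible API, a GPU environment of this kind can also be provisioned programmatically. The following is a minimal sketch using the openstacksdk Python library; the image, network, and key pair names are placeholder assumptions, and only the flavor names (pi5e for L4, p5s for H100) come from the article itself. The exact flavor name on a given tenant may carry a size suffix.

```python
# Minimal sketch: provisioning a GPU ECS instance on an OpenStack-based cloud
# such as the Open Telekom Cloud. Image, network, and key pair names are
# placeholders; adapt them to your tenant. Requires: pip install openstacksdk
import openstack

# Credentials are read from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud="otc")

# "pi5e" selects an NVIDIA L4 flavor, "p5s" an H100 flavor; the concrete name
# on your tenant may include a size suffix.
flavor = conn.compute.find_flavor("pi5e")
image = conn.image.find_image("Standard_Ubuntu_22.04_latest")  # placeholder
network = conn.network.find_network("hackathon-net")           # placeholder

server = conn.compute.create_server(
    name="gpu-worker-01",
    flavor_id=flavor.id,
    image_id=image.id,
    networks=[{"uuid": network.id}],
    key_name="hackathon-key",  # placeholder key pair
)
server = conn.compute.wait_for_server(server)
print(f"Instance {server.name} is {server.status}")
```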

Robert Renz and Florian Krumbholz from the T-Systems onsite Innovation Lab were present throughout the event to review projects and prototypes firsthand. They supported participants by answering technical questions and acting as sparring partners for in-depth discussions and feedback. “It was great to see how quickly ideas became concrete, workable solutions,” they noted. “In the future, we will continue to be ready with sovereign infrastructure, a broad GPU portfolio, and the appropriate know-how to promote innovation with the necessary cloud resources.” 

Much like the hackathon challenges themselves, the T-Systems Innovation Lab delivers onsite projects using a minimum viable product (MVP) approach. Following a brief requirements analysis conducted jointly with the customer, an initial, executable prototype is delivered within a short timeframe. This prototype then serves as the foundation for further iterations, helping to make ideas tangible and actionable. These MVPs typically combine cloud technologies, IoT, and custom software development to create intelligent and innovative solutions.

GPU power for the hackathon

The ECS compute flavors with NVIDIA L4 GPUs (pi5e) and NVIDIA H100 GPUs (p5s) are core components of the Open Telekom Cloud’s GPU portfolio. The L4-based instances are optimized for inference, small to medium-sized models, and multimodal workloads, while the H100-based flavors deliver the compute performance required for highly demanding deep learning workloads and advanced fine-tuning scenarios. Together, these two flavors underpin a wide range of AI use cases on the Open Telekom Cloud, from rapid prototyping to scalable production deployments.

NVIDIA L4 in detail (pi5e)

Optimized for efficient AI inference and video processing, NVIDIA L4 GPUs are particularly well suited for applications that must handle a high volume of requests with low latency. In the context of the hackathon, this made them ideal for chatbots, RAG scenarios, recommender systems, and image and text analysis workloads, where throughput and cost efficiency are critical.

  • Architecture: Ada Lovelace
  • CUDA cores: 7,424
  • Memory: 24 GB GDDR6 with 300-384 GB/s bandwidth
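
To make the inference role of these instances concrete, here is a minimal sketch of a batched, half-precision forward pass on a single GPU; the model (a torchvision ResNet-50) and the batch size are illustrative assumptions, not part of the hackathon setup.

```python
# Minimal sketch: batched, half-precision inference on a single GPU such as an
# L4 instance. Model choice and batch size are illustrative assumptions.
# Requires: pip install torch torchvision
import torch
from torchvision.models import resnet50, ResNet50_Weights

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dtype = torch.float16 if device.type == "cuda" else torch.float32
model = resnet50(weights=ResNet50_Weights.DEFAULT).to(device=device, dtype=dtype).eval()

# Dummy batch standing in for preprocessed images (N, C, H, W).
batch = torch.randn(32, 3, 224, 224, device=device, dtype=dtype)

with torch.inference_mode():
    logits = model(batch)        # one forward pass for the whole batch
    top1 = logits.argmax(dim=1)  # predicted class index per image

print(f"Predicted classes for the first 5 of {batch.size(0)} images: {top1[:5].tolist()}")
```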

NVIDIA H100 in detail (p5s)

NVIDIA H100 GPUs rank among the most powerful accelerators currently available for generative AI and computationally intensive deep learning models. With exceptional compute density and specialized capabilities for large language models and transformer architectures, they form the foundation for complex training, fine-tuning, and extremely demanding inference workloads. 

  • Architecture: Hopper
  • CUDA cores: 14,592
  • Memory: 80 GB HBM3 with up to 3.35 TB/s bandwidth
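
As a rough illustration of the training and fine-tuning workloads such a flavor targets, the sketch below performs one mixed-precision (bfloat16) training step in PyTorch; the toy model and random data are placeholders for a real fine-tuning job.

```python
# Minimal sketch: one mixed-precision (bfloat16) training step of the kind an
# H100 instance is typically used for. Model and data are toy placeholders.
# Requires: pip install torch
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

inputs = torch.randn(64, 1024, device=device)
targets = torch.randn(64, 1024, device=device)

# Autocast keeps the matrix multiplications in bfloat16, which the H100's
# tensor cores accelerate.
with torch.autocast(device_type=device.type, dtype=torch.bfloat16):
    loss = nn.functional.mse_loss(model(inputs), targets)

loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"Training step done, loss = {loss.item():.4f}")
```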
 

The GPU journey continues …

The hackathon demonstrated that the Open Telekom Cloud offers a state-of-the-art GPU portfolio capable of enabling advanced AI innovation. Organizations that intend to use GPU resources strategically over extended periods can benefit from discounted three-year packages to reduce overall costs. At the same time, the Open Telekom Cloud continues to expand its offerings, with new GPU options already on the horizon.

Double the H100 power: GPUs in tandem

At the beginning of 2026, the Open Telekom Cloud will further expand its GPU portfolio with the new p5e flavor, featuring NVIDIA H100 GPUs connected via NVLink. This configuration combines two H100 Tensor Core GPUs using the high-speed NVLink interconnect, delivering substantially higher bandwidth and scalability than traditional PCIe-based setups. Direct GPU-to-GPU communication enables significantly faster and more efficient execution of AI models, simulations, and training processes. Workloads in generative AI, large language models (LLMs), and high-performance computing (HPC) benefit in particular from this architecture.
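In practice, two NVLink-coupled GPUs are typically used for data-parallel training, where PyTorch’s NCCL backend transparently uses NVLink for gradient exchange when it is available. A minimal DistributedDataParallel skeleton, with a toy model standing in for a real workload, could look like this:

```python
# Minimal sketch: data-parallel training across two GPUs. When the GPUs are
# coupled via NVLink, the NCCL backend uses it for the gradient all-reduce.
# Launch with: torchrun --nproc_per_node=2 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model as a stand-in for an actual LLM or HPC workload.
    model = nn.Linear(2048, 2048).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 2048, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        loss.backward()          # gradients are all-reduced across both GPUs here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```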

Inference with L4 clusters

As outlined above, NVIDIA L4 GPUs are especially well suited for production-grade inference workloads. Complementing single high-performance GPUs, the Open Telekom Cloud is introducing a new architecture based on multiple L4 GPUs deployed in a high-availability (HA) cluster. This approach offers clear advantages in terms of scalability, stability, and load balancing.

By combining multiple L4 GPUs, inference workloads can be scaled horizontally, with requests dynamically distributed across instances to eliminate bottlenecks and ensure balanced utilization. In the event of a GPU or node failure, other cluster nodes automatically assume the workload, ensuring uninterrupted AI services in production environments.
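In a real deployment the distribution and failover logic sits in a load balancer in front of the cluster; conceptually, it boils down to something like the following sketch, in which requests are sent round-robin to a set of hypothetical inference endpoints and unreachable nodes are simply skipped:

```python
# Conceptual sketch of round-robin request distribution with failover across
# several inference nodes. The endpoint URLs are hypothetical placeholders; in
# practice a managed load balancer does this job. Requires: pip install requests
import itertools
import requests

# Hypothetical L4 inference nodes in the cluster.
ENDPOINTS = [
    "http://10.0.0.11:8000/infer",
    "http://10.0.0.12:8000/infer",
    "http://10.0.0.13:8000/infer",
]
_cycle = itertools.cycle(ENDPOINTS)

def infer(payload: dict, retries: int = len(ENDPOINTS)) -> dict:
    """Send a request to the next node; skip nodes that fail or time out."""
    last_error = None
    for _ in range(retries):
        endpoint = next(_cycle)
        try:
            response = requests.post(endpoint, json=payload, timeout=2.0)
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            last_error = exc  # node unreachable or unhealthy: try the next one
    raise RuntimeError(f"All inference nodes failed: {last_error}")

# Example usage:
# result = infer({"text": "Is the canteen open tomorrow?"})
```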

This architecture is particularly well suited for latency-sensitive applications such as chatbots, recommendation systems, and real-time image analysis.

