
AI Today and Tomorrow Series #4: Frontier Apps and Bizops



(Urban Images/Shutterstock)

Welcome to the fourth entry in this series on AI. The first one introduced the series, the second discussed artificial general intelligence, and the third reported HPC users’ expectations and concerns about the HPC-AI convergence. The topic today is the relationship between leading-edge, frontier AI and AI use for less-edgy things, including business operations—bizops. Much of this content is supported by Intersect360 Research’s in-depth interviews with HPC and AI leaders around the world. As always, comments are welcome at [email protected].

AI Apps: Formula 1 Racers vs. Family Sedans

The AI market today seems increasingly divided into two main camps: “AI-heavy” frontier applications—the equivalent of Formula 1 race cars—and a broader group of “AI-light” applications aimed at improving the efficiency and effectiveness of day-to-day business operations (and personal activities)—analogous to family sedans, though much newer and less tested as a species.

Frontier AI applications promise to advance the AI state of the art. They span many scientific and commercial domains: bioscience and healthcare, computer science, defense, energy, humanities and social sciences, manufacturing and more. Frontier apps are at the forefront of the journey to AGI. They are also where novel AI misuse and other harmful surprises are seen as most likely to emerge; for that reason, frontier AI is the primary target of AI regulations around the world.

Formula 1 racing innovations are applied to factory-produced cars (jamesteohart/Shutterstock)

The less-edgy bizops camp includes marketing and sales activities, customer relations, supply chain operations, finance, HR and other long-standing corporate functions. The ease of learning and ease of use of ChatGPT and other generative AI tools have given AI a big boost in the commercial world. A 2024 pulse poll of 250 technology leaders by professional services firm EY found, among other things, that 64% of the companies had programs to help employees keep pace with generative AI advances. Popular functions for AI bizops include parsing domain literature (e.g., medical journals, patent libraries), optimization, training, decision support, quality control and predictive maintenance. Over time, agentic AI promises to turbocharge many of these functions.

The AI Communities Are Wedlocked

There’s little chance that the frontier AI community will split out to form a separate ecological niche while the workaday bizops community evolves on its own inertial path. The two camps seem destined for a long, productive marriage. Just as Formula 1 racers try out new technologies that may later benefit family sedans, the frontier AI community is a proving ground for the AI workaday world. Conversely, extensive use in the larger workaday AI world should harden new technologies and make them more efficient and affordable for everyone, including the frontier community—a virtuous cycle.

Another reason the two camps are unlikely to divorce, as noted in article 3 in this series, is that both live on a similar, HPC-derived infrastructure and continually exchange advances. Shared infrastructure elements originating in HPC include standards-based clusters, message passing (MPI and its derivatives), high-radix networking, and storage and cooling technologies, to name a few.

Frontier Science and Business: Lessons from HPC

As we know, AI did not originate IT support for frontier science, nor the relationship between frontier computing and commercial applications. Industry, led initially by the automotive and aerospace sectors, began buying supercomputers and building HPC data centers in the late 1970s. Industrial firms in many sectors today rely heavily on HPC in their own HPC data centers, in commercial cloud environments, and at government centers around the world that provide access to leadership-class supercomputers for frontier (“breakthrough”) work.

Because of the close relationship between HPC and AI, some important lessons learned in the HPC community will likely apply to the AI world as well. Some organizations are already applying them:

  • Industrial problems can be just as challenging as frontier scientific problems. That’s one reason why governments around the world give industrial firms of all sizes access to leadership-class supercomputers and HPC expertise (now also AI and quantum computing resources) for potential breakthrough work. A few of many examples: DIRAC, DOE INCITE, EPCC, HLRS, Pawsey Supercomputing Centre, RIKEN, Shanghai Supercomputer Center, Teratec.

    The Fugaku supercomputer, a system jointly developed by RIKEN and Fujitsu Limited and based on Arm technology, was the world’s fastest supercomputer in 2020

  • Collaborations between publicly supported HPC centers and industry typically benefit both parties. A 2017 study for NSF I co-led with NCSA collected best practices in partnerships between HPC centers and industry. Both parties reported high levels of satisfaction and strong benefits: “The industrial partners reported benefits including increased competitiveness, new discoveries and insights, and faster development of products and services, among other advantages. The surveyed HPC centers reported benefits including unexpected new pathways for science, increased motivation and retention of their scientific and computational personnel, and additional revenue for reinvestment in the centers.” (Half of the industrial firms surveyed in the study were first-time users of HPC technology and expertise.)
  • HPC has migrated into enterprise data centers. In another worldwide study I was involved in, 36% of commercial respondents said they were using HPC in their enterprise data centers. Many of these companies (but not all) had been using HPC in dedicated HPC data centers for manufacturing and other traditional purposes. They saw the transformational results and decided to insert HPC (typically small systems) into the enterprise data center workflow, often at bizops pain points where enterprise servers could not meet new technical and business challenges without HPC help.

These important lessons already learned in the global HPC community promise to speed the dissemination of AI in both the HPC market and the broader hyperscale AI market. I should probably add one more important lesson to the list.

Overcoming the Snob Factor

Another important HPC achievement that offers a lesson for the AI community is overcoming the prejudice of leadership-class supercomputer users toward the larger group of entry-level and midrange HPC systems. In the early HPC era, monolithic high-end ($25-30 million) supercomputers were the only choice available to buyers (government agencies and large industrial firms), and substantial prestige was attached to these systems. The explosive market growth of more broadly affordable, standards-based clusters starting in the early 2000s temporarily split the HPC community along class lines, with much-used descriptors such as “capability vs. capacity systems” and not-uncommon assertions that the capacity users weren’t really doing HPC.

(Ollyy/Shutterstock)

By the end of the 2000s, nearly 80% of HPC systems sold around the world were clusters priced below $250,000, and this group contributed more than half of global HPC system revenue. By then, the frontier (“high-end”) HPC community had largely accepted the newcomers and forged productive relationships with many of them. This is also when the lessons listed above were learned, including the challenging nature of some industrial and business problems and the mutual benefits of collaborations between public- and private-sector HPC user organizations.

So, in this respect too, the bonds forged within the HPC community (between large and smaller users, between public- and private-sector organizations, and between those working at the frontier to advance the state of the art and those putting innovations to use in production environments) have set a collaborative tone for the AI community.

DeepSeek Approach as Frontier Democratizer?

The recent DeepSeek news showed, among other things, that impressive AI results can be achieved with less-expensive GPUs and smaller, less-generalized (more domain-specific) models that require less training data, along with less time, money and energy. In the weeks after the DeepSeek announcement, dozens of other organizations tried this approach, which may also have used the shortcut of distillation: developing a new model from an existing one rather than from scratch. This approach might further unify the AI community by helping to democratize frontier AI, making it feasible for more than just the largest, most well-heeled organizations.
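To make the distillation idea concrete, here is a minimal sketch of its core training signal: a divergence between the teacher's and student's temperature-softened output distributions. This is the generic, textbook formulation (per Hinton et al.'s original recipe), not DeepSeek's actual training pipeline; the function names and temperature value are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened probabilities; a higher T flattens the distribution,
    exposing more of the teacher's 'dark knowledge' about wrong answers."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions, scaled by T^2
    as in the standard distillation formulation."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * (math.log(ti) - math.log(si))
               for ti, si in zip(t, s)) * temperature ** 2

# A student that reproduces the teacher's logits incurs zero loss;
# a student that inverts them incurs a large one.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss([2.0, 1.0, 0.1], teacher))  # 0.0
print(distillation_loss([0.1, 1.0, 2.0], teacher))  # positive
```

In practice this term is minimized by gradient descent over the student's parameters, usually blended with a conventional cross-entropy loss on ground-truth labels; the appeal is that the student can be far smaller and cheaper to run than the teacher.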

What’s fair to conclude?

  • Important lessons the HPC community learned (after initial discomfort) set the stage for HPC user organizations of all sizes to collaborate with mutual respect, as they do today.
  • This collaboration has created a virtuous cycle. Frontier HPC users play the biggest part in research and innovation. The larger non-frontier HPC community of industrial and enterprise IT users, along with smaller government and academic users, hardens and improves innovations through extensive use. The results also are often useful to the frontier HPC users.
  • These lessons are largely applicable to the AI community. Because of this community’s tight relationship with the HPC community, frontier and non-frontier AI users (including bizops practitioners) haven’t had to learn these lessons from scratch. Top-to-bottom collaboration within the AI community, and with the HPC community, is already strong and growing.

BigDATAwire contributing editor Steve Conway’s day job is as senior analyst with Intersect360 Research. Steve has closely tracked AI developments for over a decade, leading HPC and AI studies for government agencies around the world, co-authoring an AI primer for senior U.S. military leaders with the Johns Hopkins University Applied Physics Laboratory (JHUAPL), and speaking frequently on AI and related topics.

Related Items:

AI Today and Tomorrow Series #3: HPC and AI—When Worlds Converge/Collide

AI Today and Tomorrow Series #2: Artificial General Intelligence

Watch for New BigDATAwire Column: AI Today and Tomorrow
