Forget the Magnificent 7 — These 15 AI Stocks Are the Real Winners

Written By pyuncut

AI-15: The Next Wave of the AI Trade — Infographic Report

AI‑15 Infographic Report: Where the Next AI Dollar Goes

Curated from a conversation about Futurum Equities’ “AI‑15” — a quarterly‑rebalanced basket focusing on non‑Mag‑7 enablers across chips, interconnect, data, and AI utilities.

Quick Summary

3 core monopolies highlighted: TSMC, ASML, Broadcom
90-day rebalance cadence to follow “new money” in AI
4 primary layers: Hardware · Interconnect · Data Infra · Utilities
1 key risk: becoming “just a layer” that hyperscalers can replace
Tags: Core Enablers · Usage/Utilization · Power/Cooling · Disintermediation Risk

AI Stack — Where Value May Accumulate

Hardware / Fab
Interconnect / Networking
Data Infra / Storage
Ops / Governance
Edge / Utilities

Illustrative relative opportunity weighting for 12–24 months. Not investment advice.

Featured Names & Roles

Core Enablers (moat‑rich)

  • TSMC — “AI factory.” Advanced node wafer capacity that underpins cutting‑edge GPUs/ASICs.
  • ASML — Lithography monopoly (EUV/High‑NA). Essential for next‑gen chipmaking.
  • Broadcom — High‑speed interconnect, custom silicon; the “toll booth” for AI data movement.
  • AMD — Alternative accelerators and inference engines; share gains if inference demand explodes.

Data & Operations Layer

  • Palantir — Data operating system, governance, model‑to‑enterprise connective tissue.
  • Snowflake, MongoDB — Data rails and stores; feed proprietary data to models.
  • Oracle, IBM — Compliance, security, database backbones for regulated workloads.

Utilities & Hosts (capacity bottlenecks)

  • Power & Cooling — Rack density, liquid cooling, substations become gating factors.
  • AI Hosting — Metered usage models; some former miners/neoclouds pivot to AI utility.
  • Edge — Bringing inference closer to data/users; connectivity and resiliency matter.

Watch‑outs

  • “Layer” Risk — If a product is replicable by hyperscalers, usage can compress.
  • Overbuild — Capex surges may overshoot near‑term demand; margins at risk.
  • Policy — Export controls, data sovereignty, and national security constraints.

Signals & Metrics to Track

Theme | Leading Indicator | Interpretation
Utilization | NRR, $/workload, time‑to‑production | Usage > logos — revenue expansion beats customer count.
Supply | Fab lead times, tool backlog | Stretch implies pricing power; easing warns of normalization.
Interconnect | DC networking upgrades | Faster refresh cycles favor high‑end switch/optics vendors.
Power | MW secured, PUE, cooling retrofits | Sites with energy advantage monetize capacity sooner.
Security | Compliance wins, regulated verticals | Sticky ARR; lowers rip‑and‑replace risk.

Risk Heat (Illustrative)

Valuation Stretch
Power Constraints
Disintermediation
Export/Policy
Overbuild/Capex

Allocation Framework (Example)

Example only — tailor to risk tolerance, horizon, and constraints.

Bucket | Goal | Illustrative Names | Allocation
Core Enablers | Moat & cash‑flow resilience | TSMC, ASML, Broadcom | 35–45%
Growth Engines | Higher upside, higher beta | AMD, Palantir, Snowflake, MongoDB | 25–35%
Ops & Governance | Compliance in regulated AI | Oracle, IBM | 10–15%
Utilities/Edge | Power, cooling, hosting | AI utility & edge names | 10–15%
Optionality | Emerging/small‑cap bets | Selective innovators | 5–10%

Rebalance cadence: quarterly; add on pullbacks; trim on parabolic moves.

Due‑Diligence Checklist

Business Quality

  • Does the product sit in a *must‑have* critical path, not a replaceable layer?
  • Evidence of pricing power (gross margin stability, backlog, renewal terms)?
  • Customers in regulated verticals (defense, healthcare, finance)?
  • Low churn, high NRR; usage growth outpacing logo growth?

Execution & Risk

  • Capex discipline vs. growth; ROIC trend improving?
  • Energy & cooling secured; PUE competitive?
  • Geopolitical exposure manageable (supply chain, export controls)?
  • Clear roadmap for inference‑heavy workloads?

Narrative Map: How the AI Dollar Flows

Hyperscaler OCF →
Compute (GPUs/ASICs) →
Interconnect/Networking →
Data Infra & Storage →
Ops/Governance →
Utilities (Power/Cooling/Hosts) →
Monetized Workloads

Operating cash flow from hyperscalers funds the stack; workloads monetize as capacity comes online.

Actionable Takeaways

  • Favor moats. TSMC, ASML, Broadcom remain foundational for advanced AI at scale.
  • Track utilization, not logos. Usage‑based revenues validate true adoption.
  • Respect power constraints. Sites with cheaper, reliable energy & advanced cooling can out‑earn peers.
  • Stage entries, rebalance quarterly. Use pullbacks/rotations — avoid chasing blow‑offs.
  • Diversify across the stack. Mix core enablers with data/ops and a measured utility sleeve.
Compiled for reference. Not investment advice. © 2025

The AI Trade: From the Magnificent 7 to the Rising AI-15

Artificial intelligence is no longer a speculative theme — it is fast becoming the core axis upon which the next decade of growth will spin. But many investors are rightly asking: we already have Nvidia, Microsoft, and the rest of the “Magnificent 7” (or “Mag-7”) dominating headlines — is there more upside beyond them? And if so, how should one structure a portfolio to ride the AI wave without paying an exorbitant valuation premium for every exposure?

Enter Futurum Equities’ “AI-15”, a curated list of 15 non-Mag-7 names that Shai (Shay) Boloor describes as the “new money” destinations in the AI economy. The AI-15 seeks to spot the infrastructure, data, control, and expansion layers where the next wave of capital is flowing — the parts of the AI stack that may not yet be fully “priced in.”

In this post, we’ll:

  • Unpack Boloor’s rationale behind the AI-15 and how it is constructed
  • Decompose the layers of the AI stack (hardware, software, interconnect, utilities)
  • Highlight a few standout names and their pros/risks
  • Identify key roadblocks and bear-case scenarios
  • Offer a framework for thoughtful portfolio allocation into the AI trade

1. The Genesis and Methodology of the AI-15

At its core, Boloor’s ambition is to go beyond the obvious — the Nvidia, Microsoft, Meta, and Google names that have already surged — and to capture where fresh capital is flowing in the AI ecosystem today. As he describes:

“We wanted to create a list that was going to be rebalanced every 3 months on where new money in the AI economy is going right now at this moment.”

Thus, the AI-15 is not a static “best of” list but a dynamic barometer, refreshed quarterly to capture the evolving undercurrents of the AI investment frontier. The idea is that the second, third, and fourth waves of opportunity lie beneath the surface layers: companies that enable the enablers, or that plug new gaps in the AI infrastructure.

In publicly shared materials, Futurum categorizes the AI-15 across three broad layers (or “control, operation, expansion”) and highlights top names such as AMD, TSMC, Broadcom, ASML, Palantir, Snowflake, Oracle, IBM, CrowdStrike, among others. (OKX)

According to Boloor, the logic here is:

  • The first wave of AI adoption was hardware/compute (e.g., GPUs).
  • The second wave was software: embedding generative AI into product suites (e.g., Microsoft Copilot, Google Gemini in Search).
  • But he believes much of that is already “priced in,” so the next wave will emphasize the deeper plumbing: data infrastructure, interconnect, inference, power & utility, data governance, and edge AI — the places where AI workloads must run reliably, at scale.

By focusing on non-Mag-7 names, the AI-15 is designed to find incremental alpha, not chase already crowded names.


2. Dissecting the AI Stack: Where Value May Accumulate

To understand the AI-15 framework, it helps to view the AI stack in layers. Below is a simplified architecture:

Layer | Function | Key Risk / Challenge | Potential AI-15 Names
Hardware / Fabrication | The physical substrate: chips, lithography, wafers | Geopolitics, supply chain, fabrication capacity | TSMC, ASML, Broadcom
Control / Interconnect / Networking | Moving data rapidly and reliably across systems | Latency, congestion, thermal limits | Broadcom, Cisco, Qualcomm
Inference Engines & Accelerators | Running the models, processing AI workloads | Power, heat, chip design constraints | AMD, (others)
Data Infrastructure / Storage / Memory | Feeding models with the right data, storing large datasets | Throughput, security, cost per bit | Micron, MongoDB, Snowflake, Oracle
Operating / Management / Governance | Orchestrating, governing, and securing models and data | Model drift, compliance, interoperability | Palantir, Oracle, IBM, ServiceNow
Edge / Expansion / Host Infrastructure | AI at the edge, utilities, powering smaller deployments | Connectivity, robustness, decentralization | Nebius (example from Boloor), Iron (power/hosting)

Boloor often uses metaphors to bring his thesis alive:

  • He calls Broadcom the “Uncle Sam of AI” because it “taxes all the workloads that are happening in AI” — i.e., much of data center interconnect and networking runs through it.
  • He highlights TSMC and ASML as underappreciated monopolies: TSMC is the “AI factory” making wafers, while ASML builds lithography machines that are essential for advanced chip manufacturing.
  • He warns of software-layer names that become “layers” in the tech stack — easy for hyperscalers to replicate or compete with — citing Datadog as a name that could see overlap with AI-native observability tools.

One emergent theme Boloor emphasizes is inference — the idea that once model training becomes cheaper, the bulk of computational burden will shift to inference (i.e., executing the model). As he notes:

“As training data gets lower and lower, inference will explode … that’s where you’re seeing AMD be a major beneficiary … memory, networking, Broadcom, all benefit from inference workloads.”

Yet another theme is AI utility & power/cooling / hosting. He describes new players (some formerly miners or cloud-like hosts) that are pivoting to metered, usage-based AI hosting funded directly by hyperscalers. Names like Nebius and Iron (not all public) are examples he cites. He argues these firms are part of a nascent “AI utility” arms race.


3. Standout AI-15 Picks: Highlights & Risks

Below are several names from the publicly shared AI-15 lists (with commentary), plus caveats to watch.

Broadcom (AVGO)

  • Strengths: Dominant in high-speed interconnect, ASICs, networking — crucial for data movement in AI data centers.
  • Thesis: It “taxes” AI workloads; more demand means more revenue.
  • Risks: If hyperscalers build more vertically (i.e., develop in-house networking/RoCE, switching fabrics), that could encroach on Broadcom’s territory.

TSMC

  • Strengths: The premier contract wafer foundry; virtually all advanced chip makers depend on it.
  • Thesis: It is the “AI factory” — without its wafers, next-gen silicon can’t exist.
  • Risks: Geopolitical pressure (e.g., cross-strait, export controls), fab capacity constraints, competition.

ASML

  • Strengths: Monopoly in extreme ultraviolet (EUV) and next-gen lithography tools, which are essential for advanced nodes.
  • Thesis: Without ASML tools, cutting-edge chips are impossible.
  • Risks: Technological breakthroughs elsewhere, trade restrictions, competition from alternative lithography approaches.

AMD

  • Strengths: Well-positioned to capture a slice of inference demand; competitive chip architecture.
  • Thesis: Even capturing a modest share of the AI acceleration/inference market (~10%) is meaningful.
  • Risks: Intense competition vs. Nvidia, architectural mismatch, or underperformance vs expectations.

Palantir (PLTR)

  • Strengths: Data operating systems, ontology, governance, and a platform that many enterprises already use.
  • Thesis: It sits above the models — the interface between model and enterprise.
  • Risks: Dependence on defense contracts, customer concentration, and competitive pressure from cloud providers.

Snowflake (SNOW), MongoDB (MDB)

  • Strengths: Data storage, pipelines, and enabling proprietary data to feed into models.
  • Thesis: Models are nothing without high-quality, structured data — these firms help supply that.
  • Risks: Compression of margins, commoditization, competition, or disintermediation by AI-native tooling.

Oracle (ORCL), IBM

  • Strengths: Deep infrastructure in governance, security, enterprise relationships, and database backbones.
  • Thesis: As data flows, these names may get reappraised for their role in governing, tracking, and securing AI processes.
  • Risks: Legacy baggage, slower growth, difficulty adapting to cloud/AI-native paradigms.

Utility / Host / Edge Names (e.g. Nebius, Iron)

  • Strengths: Physical hosting, cooling, and scalable AI infrastructure to support massive inference loads.
  • Thesis: AI needs not just compute, but electricity, data rails, and cooling — whoever builds the scalable utility backbone may prosper.
  • Risks: Capex intensity, energy costs, regulatory constraints, demand volatility.

It’s worth noting that Futurum’s posted AI-15 list organizes names into control, operating, and expansion layers, with the top three “AIRometer” scores (at a given snapshot) going to TSM, AVGO, and PLTR. (X, formerly Twitter)

Because the list is rebalanced quarterly, staying current with the score trends is crucial.


4. Key Roadblocks & Bear Cases: The Peaks Before the Troughs

Even with a compelling structural thesis, the AI trade faces real challenges. Below are the principal “mountains” that Boloor warns investors must navigate.

A. Being Just a “Layer” — Easy to Disintermediate

One of Boloor’s more provocative warnings is that many tech firms behave as “layers” on the stack — i.e. non-critical, replaceable abstractions that hyperscalers or AI-native firms might build in-house.

  • He cites Datadog: though a leader in observability, its services may be replicated by OpenAI or other AI firms themselves (or via open-source observability tools).
  • His counsel: don’t watch customer count; watch utilization, net revenue retention, and how sticky usage is over time.

If your software is “nice to have” rather than “must-have,” the emerging AI arms race might bypass you.

B. Power, Cooling, and Infrastructure Bottlenecks

One frequently under-discussed constraint is energy and physical infrastructure. AI compute is power-hungry, heat-generating, and often requires co-location, advanced cooling, and a reliable energy supply.

  • Boloor warns that the grid is not ready for what’s coming.
  • The near-term bottleneck may shift from compute to rack-level hosting, cooling capacity, electrical transformers, and power distribution.
  • If data centers remain power-constrained, it could throttle growth.

C. CapEx, Overinvestment & “Zoom-Churn”

The AI trade is not immune to capital discipline risks. Too much buildout too quickly can lead to overcapacity, thinning margins, or stranded assets — especially in smaller names.

  • In historical analogues (e.g., telecom, cloud), many infrastructure names overbuilt and got punished in downturns.
  • If capital markets tighten or macro stress appears, high-beta AI infrastructure names may suffer first.

D. Valuation Stretch & Multiple Compression

Some AI-15 names likely carry lofty multiples already (due to narrative). If sentiment reverses or growth disappoints, multiple contractions may drag stocks lower even if fundamentals hold.

  • Investors should expect volatility, both in individual names and in the theme as a whole.
  • The risk of an AI bubble or sharp retracement is real — Boloor doesn’t deny it: “There’s a decent chance we see a bubble and a burst.”

E. Regulatory, Geopolitical, and Trade Risk

The AI infrastructure domain is deeply entangled with national security, export controls, chip supply chains, and data sovereignty. A few possible risks:

  • Tighter export controls on advanced node chips (e.g., EDA tools, EUV sources).
  • Sanctions on Chinese chip foundries or firms.
  • National policies prioritizing domestic sourcing.
  • Security compliance (e.g. data governance, privacy laws) that could slow AI adoption or force localization.

Any of these could disproportionately impact players in hardware, interconnect, or cross-border data movement.

F. Customer Consolidation by Hyperscalers

One recurring danger is that the big cloud/tech companies will internalize more of the stack, absorbing or replicating features currently provided by AI-15 players.
For example, Microsoft is building its own observability tooling, OpenAI is building internal infrastructure, and cloud providers are bundling “AI as a service” layers.

Once that happens at scale, smaller vendors may be squeezed.


5. Portfolio Approach: How to Play (and Hedge) the AI Trade

Riding the AI wave is less about guessing which single name will 10× and more about building a resilient, diversified posture across the stack — while hedging against extreme outcomes.

Below is a suggested framework:

A. Allocate in Tiers / Buckets

Divide exposure among a few buckets:

  1. Core / “Safe” AI picks (blue chips): TSMC, ASML, Broadcom — names with entrenched moats and proven scale.
  2. Growth / Mid-size picks (modest conviction): Palantir, Snowflake, AMD, Oracle — higher growth but higher risk.
  3. Infrastructure utilities / early bets: AI hosting, edge compute, specialty infrastructure players (if public).
  4. Selective small caps/optionality plays: Where one or two names could break out, but remain high risk.

This tiered approach tempers exposure to any one name blowing up.
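The tiered buckets above can be made concrete as a small allocation table. A minimal sketch follows — every name, tier weight, and placeholder here is illustrative, chosen only to mirror the four buckets described, not a recommendation:

```python
# Illustrative bucketed allocation, loosely mirroring the tiers above.
# All names, bucket targets, and placeholders are assumptions for the sketch.
BUCKETS = {
    "core":        {"names": ["TSM", "ASML", "AVGO"],        "target": 0.40},
    "growth":      {"names": ["AMD", "PLTR", "SNOW", "MDB"], "target": 0.30},
    "ops":         {"names": ["ORCL", "IBM"],                "target": 0.125},
    "utilities":   {"names": ["<ai-utility/edge names>"],    "target": 0.125},
    "optionality": {"names": ["<small-cap bets>"],           "target": 0.05},
}

def per_name_weights(buckets):
    """Split each bucket's target weight equally across its names."""
    weights = {}
    for bucket in buckets.values():
        share = bucket["target"] / len(bucket["names"])
        for name in bucket["names"]:
            weights[name] = round(share, 4)
    return weights

# Sanity check: bucket targets should sum to 100% of the sleeve.
total = sum(b["target"] for b in BUCKETS.values())
assert abs(total - 1.0) < 1e-9, f"bucket targets sum to {total:.3f}, not 100%"
print(per_name_weights(BUCKETS))
```

Equal-weighting within a bucket is just one choice; conviction-weighting inside each tier works the same way, as long as the bucket totals still sum to the sleeve's target.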

B. Use Scaling & Staging

  • Begin with exposure to core names, then scale into conviction names over time.
  • Avoid “all-in” bets early; instead, average in to themes as new quarterly rebalance insight arrives.
  • Keep some dry powder to opportunistically add on pullbacks, rotations, or when new names join the AI-15.

C. Monitor Key Valuation & Adoption Metrics

Don’t rely solely on share price — track:

  • Utilization/usage growth (not just customer growth)
  • Net Revenue Retention (NRR) and expansion revenue
  • CapEx / gross margin trends
  • Load factor/occupancy/energy efficiency in infrastructure names
  • AIRometer / internal scoring shifts from the AI-15 insights

If growth decelerates or utilization stalls, it may presage multiple compression.
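Of the metrics above, net revenue retention is the easiest to compute yourself from a company's cohort disclosures. A small sketch with hypothetical numbers (the $10M cohort figures are invented for illustration):

```python
def net_revenue_retention(start_arr, expansion, contraction, churn):
    """NRR = (starting ARR + expansion - contraction - churned ARR) / starting ARR,
    measured over the same customer cohort, typically trailing twelve months."""
    return (start_arr + expansion - contraction - churn) / start_arr

# Hypothetical cohort (in $M): $10M starting ARR, $2.5M expansion revenue,
# $0.3M in downgrades, $0.7M churned outright.
nrr = net_revenue_retention(10.0, 2.5, 0.3, 0.7)
print(f"NRR = {nrr:.0%}")  # 115%: existing usage expanding faster than churn
```

An NRR comfortably above 100% is the "usage > logos" signal from the table earlier: the installed base alone is growing revenue even before any new customers are added.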

D. Diversify Across Risk Factors

Because the AI trade sits at the intersection of tech, energy, infrastructure, and geopolitics:

  • Include hedge positions or sleeves in utilities, energy, or software to balance risk
  • Limit single-stock concentration — no more than a small percentage of your portfolio per name
  • Use options hedges (e.g. puts, collars) for key names if sentiment becomes frothy

E. Time the Entry Windows

Beware chasing a parabolic move. Many AI-related names may see strong momentum runs — but entering too late risks catching a blow-off top. A better approach:

  • Watch for post-earnings pullbacks, sector rotations, or macro sell-offs as potential entry points
  • Use quarterly rebalance updates from Futurum to get signals on which names are gaining or losing conviction
  • Maintain flexibility: if one name is upgraded/downgraded out of the AI-15, shift exposures accordingly
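Mechanically, shifting exposures on a rebalance date reduces to computing the trades that restore target weights, ignoring drift too small to be worth the transaction costs. A sketch under assumed positions and targets (the dollar values and 2% tolerance band are hypothetical):

```python
def rebalance_trades(positions, targets, tolerance=0.02):
    """Given current position values ($) and target weights, return the dollar
    trades (+buy / -sell) that restore targets; skip drift inside the band."""
    total = sum(positions.values())
    trades = {}
    for name, target in targets.items():
        current = positions.get(name, 0.0) / total
        drift = current - target
        if abs(drift) > tolerance:                   # only trade meaningful drift
            trades[name] = round(-drift * total, 2)  # negative = trim, positive = add
    return trades

# Hypothetical book after a quarter of drift (dollar values):
positions = {"TSM": 48_000, "AVGO": 30_000, "PLTR": 22_000}
targets = {"TSM": 0.40, "AVGO": 0.35, "PLTR": 0.25}
print(rebalance_trades(positions, targets))
# TSM has run ahead of target, so it gets trimmed; AVGO and PLTR get added to.
```

The tolerance band is what implements "trim on parabolic moves, add on pullbacks" without churning the whole book every quarter; a name dropped from the AI-15 would simply get a zero target.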

F. Be Ready for a Multi-Year Cycle

The AI transition is not linear. Expect:

  • Strong uptrends, but also multi-year “digestion” periods
  • Volatile swings, rotation between value/growth, and narrative stress
  • Periods where the multiples “catch up” with technology adoption, not the other way around

Boloor suggests we may be only around year 2 of a 4–5 year bull market, implying that disciplined, patient investors may be rewarded over time. (PodScripts)


6. Illustrative Scenario: How This Might Play Out

Let’s sketch a hypothetical multi-year trajectory to illustrate how the AI-15 trade might evolve (this is for narrative clarity, not a prediction).

Year 1–2: Foundation & Catch-up

  • Nvidia, Microsoft, Meta, and Alphabet lead the rally (hardware + AI embedding).
  • Investors start asking: “What’s next beyond the headline names?”
  • AI-15 names begin capturing attention and capital flows.
  • Infrastructure names (e.g., Broadcom, TSMC) deliver consistent earnings, and multiples start to rerate upward.
  • Software/data names (Palantir, Snowflake, MongoDB) show strong utilization growth and perhaps IPOs, partnerships.
  • Utility infra names (edge compute, hosting) begin signing deals with hyperscalers.

Year 3: Rotation & Breadth

  • As compute availability scales, more workloads move from proof-of-concept to production-level inference.
  • Capital rotates into the mid/long tail of the AI stack; more small- and mid-cap names from the AI-15 see outsized gains.
  • Sentiment becomes exuberant, valuations stretch further; speculative names begin to emerge.
  • Early signs of overheating or breadth narrowing may surface.

Year 4–5: Derisking & Capital Discipline

  • Growth begins to moderate; valuations come under scrutiny more broadly.
  • The “survivor” infrastructure names remain winners; weaker ones contract or consolidate.
  • The first major bubble/valuation reset may occur in more speculative AI plays.
  • Investors who held a diversified AI-15 basket may be able to lock in gains, rebalance, or rotate into adjacent themes (quantum, robotics, etc.).

While the above is stylized, it underscores that the AI trade is long and uneven, with rotations between defensive infrastructure, growth software, and speculative optionality.


7. Final Thoughts & Cautions

The AI-15 approach offers a compelling middle ground: not the ultra-safe “Magnificent 7,” yet not blind speculative bets. It targets structural enablers of AI growth, while tilting toward fresh capital flows rather than re-rated popularity.

Still, some cautions:

  • The list is only as good as its ongoing rebalancing and research discipline; as the thesis and scoring evolve, older names may fade.
  • There’s always a risk of crowded trades once the AI narrative deepens; many names could become correlated and vulnerable to systemic drawdowns.
  • Macro factors — interest rates, energy prices, supply chain shocks, regulatory shifts — can disrupt even the strongest AI stories.
  • Be humble about timing. The AI trade may reward those who patiently stage bets rather than trying to catch every move.

Leave a Comment