
How to Use AI for Infrastructure

Zaki Hasan · 25 min read

If you develop, finance, operate or invest in infrastructure, you sit at the intersection of three structural realities that distinguish infrastructure from every other asset class. First, the assets are long-lived - concession lives of 25, 50, sometimes 99 years, with capital structures and contractual frameworks built around that duration. Second, the cash flows are governed by contracts and regulatory frameworks rather than by markets - concession agreements, availability payments, regulated returns, take-or-pay contracts, capacity charges, traffic risk, demand risk, all with specific mechanics that vary by asset, by jurisdiction, by vintage. Third, the asset base is heterogeneous in ways that have no parallel - a portfolio that includes a Spanish toll road, an Australian airport, a UK water utility, a US data centre, a German fibre network, a Canadian PPP hospital and a Texan transmission line is one portfolio for the fund, and seven entirely different analytical problems for the team supporting it.

The infrastructure investment industry has scaled massively over the last fifteen years. Aggregate AUM in unlisted infrastructure has grown from roughly $300bn in 2010 to over $1.3 trillion today. The big platforms - Brookfield, Macquarie, Global Infrastructure Partners, KKR Infrastructure, Stonepeak, EQT Infrastructure, IFM, I Squared - each manage tens of billions across hundreds of assets. The analytical workload required to run those platforms properly has grown faster than the headcount available to support it. The gap is being held together by spreadsheets, consultants, and very long working weeks.

This article walks through, in detail, what changes when AI lands on this gap. I will start with the workflows that cut across infrastructure regardless of sub-sector - they are where most of the analytical time goes, and where most of the immediate productivity unlock happens. I will then go into the sub-sector specific workflows where the analytical content genuinely differs.


The structural problem in infrastructure

A diversified infrastructure platform managing $20bn across 50-80 assets typically has, at any given moment:

  • Concession agreements and contractual frameworks running to hundreds of pages each, in different legal traditions, in different languages, with different mechanics governing tariffs, performance regimes, force majeure, change in law, termination, hand-back, and dispute resolution. Each contract is negotiated and bespoke. The terms that matter for any given decision are buried somewhere in the document and have to be located by someone who has read the contract recently.
  • Asset-level financial models in Excel. Built by the bid team, refined by the financial advisor, handed to the asset manager. Each model contains the asset's specific revenue mechanics, its operating cost structure, its debt sculpting, its tax position, its base case returns. Models for assets acquired in different vintages were built by different people, follow different conventions, and contain different assumptions. The model that was used to win the bid is rarely the model that gets maintained for ongoing asset management.
  • Operational data in the asset's own systems - SCADA for a utility, DCS for a process facility, an airport operational system, a toll collection system, a network monitoring system for digital infrastructure. Per-asset, in different formats, at different cadences.
  • Regulatory filings, tariff submissions, license documentation. For regulated assets - water utilities, transmission, distribution, airports under regulated frameworks - the regulatory layer is the central determinant of returns, and the regulatory cycle (typically 5 years for UK regulated utilities, 4-5 years in much of Europe) is the central operating rhythm.
  • Lender documentation, debt covenants, hedging arrangements. Each asset has its own debt structure. Each debt structure has its own covenant package. Each covenant test happens on its own cadence and against its own definitions. A coverage ratio that is comfortable for one asset may be tight for another with different mechanics.
  • Performance data against the concession or contract. Availability for a PPP. Lane availability for a toll road. Service quality metrics for a utility. Power and cooling availability for a data centre. Each asset has its own KPI framework and its own reporting obligations.
  • Investment committee papers, board papers, LP communications. Each asset goes through quarterly review. Material events trigger ad hoc papers. Each paper requires data assembly from the underlying systems.
  • ESG and sustainability data, which for infrastructure is unusually heavy. Embodied carbon in construction. Operational emissions. Water use. Biodiversity. Community engagement. Worker safety. For European platforms, EU taxonomy alignment and SFDR reporting on top of that.
  • Macroeconomic exposure data - inflation, interest rates, FX, commodity prices, demand drivers - that flows through to asset returns through specific contractual and operational mechanics.
  • Pipeline tracking - bid opportunities, exclusivity discussions, secondary market opportunities - typically in a CRM, with each opportunity tracked by the deal lead.

Now ask a question that crosses these systems. "Inflation expectations have shifted by 150 basis points over the last quarter. Across our portfolio, which assets benefit, which are exposed, and what is the net impact on fund returns? Within the exposed assets, which have contractual mechanisms that will allow recovery on what timeframe, and where is the structural risk that is not recoverable?" That is a question with an answer in the data. The platform team that can produce that answer in a day will run their portfolio differently from the team that produces it in three weeks. Most teams currently produce it in three weeks, badly.

This is the structural reality of infrastructure analytical work. The contractual layer is dense, the assets are heterogeneous, and the cross-asset analytical work that is the core of running a portfolio is held together by manual data assembly. That assembly layer is where AI is now landing.


The cross-cutting workflows

Before getting into sub-sector specifics, there are eleven analytical workflows that cut across all infrastructure sub-sectors. These are where most analytical time goes and where the productivity unlock applies regardless of whether the asset is a toll road or a transmission line.


1. Concession and contract intelligence

Current state: every infrastructure asset is governed by a body of contracts. Concession agreements, PPP contracts, regulatory licenses, off-take agreements, availability payment mechanisms, performance regimes. The mechanics that determine the asset's economics are in the contract. The contract is in a PDF, sometimes in a foreign language, often in a legal tradition the asset manager is not native to.

When a question arises that depends on contractual interpretation - a force majeure event, a change in law, a tariff reset, a performance dispute, a hand-back provision - the answer requires somebody to read the contract. In most platforms, "somebody who has read the contract recently" is a small number of people, and their availability becomes the binding constraint on how quickly a decision can be made.

What AI does: it reads the contracts and maintains a structured representation of the mechanics. The tariff structure of the Spanish toll road is extracted into a queryable form. The availability payment mechanism of the Australian PPP is extracted. The change-in-law protections in the UK water license are extracted. The hand-back regime of the airport concession is extracted. Each clause is linked back to the source contract and page.
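To make "a structured representation of the mechanics" concrete, here is a minimal sketch of what an extracted-clause layer might look like. The class, field names and sample clauses are all illustrative, not a real schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExtractedClause:
    """One contractual mechanic in queryable form, traceable to source."""
    asset: str
    mechanic: str          # e.g. "tariff_indexation", "hand_back"
    summary: str
    source_document: str
    source_page: int

# Illustrative portfolio of extracted clauses
clauses = [
    ExtractedClause("Spanish toll road", "tariff_indexation",
                    "Tariffs indexed to 70% of CPI, annual reset",
                    "Concession Agreement (2004)", 87),
    ExtractedClause("Australian PPP", "availability_payment",
                    "Quarterly availability payment with abatement regime",
                    "Project Deed", 212),
]

def provisions_for(clauses, mechanic):
    """Every asset's provisions for one mechanic, with citations."""
    return [c for c in clauses if c.mechanic == mechanic]

hits = provisions_for(clauses, "tariff_indexation")
for c in hits:
    print(f"{c.asset}: {c.summary} [{c.source_document}, p.{c.source_page}]")
```

The point of the structure is the last line: every answer carries its citation back to the document and page, which is what lets the legal team trust the starting position.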

When a question arises - does the IRA's prevailing wage requirement trigger a change in law adjustment under our US transmission concession; what is the indexation formula on our French toll road and how does it apply to the new motorway tariff cap; does the upcoming regulatory review on our German distribution asset have a fast-track mechanism - the platform produces the relevant contractual provisions immediately, with full traceability. The legal team still does the legal work. The asset manager starts from a position of knowing what the contract says.

The compression here is structural. A workflow that takes a week of in-house counsel time becomes one that takes a day, because the data assembly is already done.


2. Financial model maintenance and scenario analysis

Current state: every asset has a financial model. The model that was built for the bid is rarely the model that should be running for ongoing asset management - the bid model is structured around a transaction question, not around a continuous operations question. In practice, most platforms maintain one model per asset, modified iteratively as circumstances change, increasingly brittle over time.

When a scenario needs to be run - what does a 100bp move in inflation do to fund returns; what does a 24-month delay in the regulatory determination do to the leveraged return on the water utility; what does a renegotiation of the tariff on the toll road look like under three different concession models - running it is hours of work per asset, multiplied across the portfolio. Most scenarios that should be run, are not.

What AI does: maintains the asset-level financial models in a state where their inputs are linked to the underlying contractual, operational and macroeconomic data, and where their outputs feed a portfolio-level view automatically. Sensitivity analysis runs across the portfolio in seconds rather than days. The CFO can ask, on any Tuesday, "across our portfolio, what is the net effect on fund returns of the macro scenario the IC is debating," and have a full attribution by asset.
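As a toy illustration of what "sensitivity analysis across the portfolio in seconds" means mechanically - the revenue figures and pass-through coefficients here are invented; in practice they come from each asset's contracts and financial model:

```python
# Invented per-asset inflation pass-through parameters; the real linkage
# comes from each asset's contractual mechanics and financial model.
assets = {
    "Spanish toll road": {"revenue_m": 180.0, "pass_through": 0.70},
    "UK water utility":  {"revenue_m": 950.0, "pass_through": 1.00},
    "US data centre":    {"revenue_m": 240.0, "pass_through": 0.00},
}

def inflation_shock_impact(assets, shock_bp):
    """First-order annual revenue impact of an inflation shock, by asset."""
    shock = shock_bp / 10_000
    impact = {name: a["revenue_m"] * a["pass_through"] * shock
              for name, a in assets.items()}
    impact["TOTAL"] = sum(impact.values())
    return impact

result = inflation_shock_impact(assets, shock_bp=100)  # a 100bp shock
```

The real engine runs full models rather than first-order approximations, but the shape is the same: per-asset mechanics, aggregated with attribution.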

This is not magic. It is the financial modelling that should always have been continuous, but wasn't, because the cost of building and maintaining it was prohibitive. AI changes those economics.


3. Asset valuation and portfolio NAV

Current state: every infrastructure fund has to maintain valuations on every asset. For unlisted infrastructure, the valuations are done quarterly, supported by independent valuation advisors at semi-annual or annual cadence. The valuations are sensitive to discount rate, growth rate, terminal value treatment, and asset-specific operational and contractual factors. They are also material to LP relations, fund performance reporting, and any sale or refinancing decision.

The valuation refresh is heavy. A typical mid-cap infrastructure fund spends a substantial fraction of its analytical bandwidth on the quarterly valuation cycle. The work is data assembly, model running, reconciliation against the prior period, and write-up.

What AI does: maintains the valuations continuously. Each asset's valuation is constructed from the linked data layer - the financial model, the underlying performance, the macro inputs, the contractual position. When inputs move, valuations move. The quarterly valuation cycle starts from a current baseline rather than from a workstream, and the team's time goes into the judgement and the write-up rather than the assembly.
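The underlying calculation is ordinary DCF; what changes is that the inputs stay live rather than being reassembled each quarter. A stripped-down sketch with invented numbers:

```python
def asset_value(cash_flows, discount_rate):
    """Present value of a forecast cash flow stream (plain DCF).
    In the platform, the cash flows come from the linked financial
    model and the rate from the valuation policy; both are invented here."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Three years of illustrative distributions, discounted at 9%
v = asset_value([50.0, 52.0, 54.0], 0.09)
```

When a model input moves, the valuation recomputes; the quarterly cycle then starts from reconciling judgement against a current number, not from rebuilding the number.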

For platforms that have inbound interest in assets between formal sale processes, the practical implication is that the platform is always in a position to evaluate the inbound on the merits. Currently, most platforms cannot.


4. Acquisition due diligence

Current state: an infrastructure data room runs from hundreds of documents for a primary bid on a single asset to thousands for a portfolio transaction. The technical content is sub-sector specific. The work structure is the same. Read the contracts, validate the operational performance against the historical record, model the returns under house assumptions, assess the regulatory and contractual risks, identify the value-creation thesis.

For a fund running an active deal pipeline, the analytical capacity to seriously evaluate every opportunity is the binding constraint. Funds look at many opportunities and bid seriously on a fraction of them. The selection of which to bid on is itself a function of where the analytical bandwidth is at any given moment.

What AI does: reads the data room. The platform produces a structured view of the asset - contractual position, regulatory environment, operational performance, financial model, identified risks, valuation under house assumptions, comparison to comparable transactions - in hours rather than weeks. The deal team starts the actual investment work from a synthesised baseline. Three opportunities can be evaluated in the time one currently takes.

In the infrastructure secondary market, where transaction processes are competitive and timelines are tight, this is direct competitive advantage. Over a fund deployment cycle, it is the difference between deploying the fund well and deploying it adequately.


5. Refinancing and capital structure optimisation

Current state: infrastructure assets get refinanced on schedules driven by their original debt maturities, by interest rate environments, and by the platform's view of optimal capital structure. The refinancing process involves market sounding, lender selection, term sheet negotiation, due diligence, documentation, and closing. The analytical work involves maintaining a current view of the asset's debt capacity, the prevailing market terms, the implications of different structures for equity returns, and the trade-off between leverage and flexibility.

In practice, the analytical work for any given refinancing is built bespoke each time, even though the structural questions are the same across most assets. The platform's view of optimal portfolio capital structure across all of its assets is rarely maintained continuously.

What AI does: maintains the capital structure view. Each asset's debt capacity is calculated continuously from its current cash flows and contractual position. The market terms layer (current pricing, current covenant packages, current tenor availability) updates as new transactions print. The implications of different refinancing options for equity returns and for portfolio-level metrics (fund leverage, fund concentration) are surfaced.
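The continuous debt capacity calculation is, at its core, a DSCR-constrained annuity. A simplified sketch, assuming a flat CFADS profile and fully amortising annuity-style debt service (all figures invented):

```python
def debt_capacity(cfads, rate, tenor_years, target_dscr):
    """Maximum debt a cash flow stream supports at a target DSCR,
    assuming flat CFADS and annuity-style debt service."""
    max_service = cfads / target_dscr              # affordable annual service
    annuity_factor = (1 - (1 + rate) ** -tenor_years) / rate
    return max_service * annuity_factor            # PV of that service stream

# Illustrative: 50m CFADS, 6% all-in cost, 15y tenor, 1.30x target DSCR
cap = debt_capacity(cfads=50.0, rate=0.06, tenor_years=15, target_dscr=1.30)
```

Real sculpting works period by period against a forecast CFADS profile rather than a flat annuity, but the constraint being optimised is the same.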

For a platform with a substantial refinancing pipeline - which describes most large platforms in any given year - this materially changes the responsiveness of the treasury function and the quality of the structural decisions.


6. Regulatory and policy exposure

Current state: regulatory exposure is the central analytical problem in regulated infrastructure and a major one in concession-based assets. The regulatory determination cycle for a UK water utility is the single most important multi-year event in that asset's life. The Inflation Reduction Act (IRA) in the US has reshaped the value of every renewables and transmission asset; the European Green Deal Industrial Plan is doing the same in Europe. The IRA's domestic content adders, prevailing wage and apprenticeship requirements, and transferability provisions have created a regulatory analytical workload that did not exist three years ago.

Tracking the exposure across a portfolio, anticipating policy changes, modelling the implications, and positioning for the regulatory cycle is a workload that scales with portfolio size and jurisdictional spread. Most platforms do it badly because the workload is heavy and the cross-asset integration is non-trivial.

What AI does: maintains the linkage between the regulatory framework and each asset's economics. When a policy change is announced or anticipated - a draft of the next regulatory determination, new IRS guidance on the IRA, a tariff review on a concession - the platform identifies which assets are exposed, quantifies the impact under different scenarios, and surfaces the implications for the portfolio strategy. Regulatory submissions for the asset's own determinations are supported by the data layer, with the underlying performance data, the comparable benchmarks, and the supporting analysis already in place.

For platforms with substantial regulated exposure, this is permanent productivity gain on what is otherwise a permanently heavy workstream.


7. Lender, LP and board reporting

Current state: every asset has lender reporting obligations. Every fund has LP reporting obligations. Every platform has board reporting obligations. The reports are formatted, detailed, and produced on a fixed cadence. The underlying data has to come from the asset, the operations, the markets, the financing structure, and the corporate. Producing the reports takes substantial team time, most of which is data assembly rather than analysis.

What AI does: produces the reports from the data layer. The lender's coverage ratio test at the next test date is calculated from current performance. The LP report is generated from the same data the GP team uses for management. The board pack starts from the data layer and the team focuses on the narrative and the strategic recommendations rather than on the assembly.

Across the portfolio, this is among the most immediately measurable productivity gains.


8. Operational performance benchmarking

Current state: every infrastructure asset has operational KPIs. Availability, throughput, service quality, cost per unit, safety metrics. Comparing the asset's performance against peers, against its own history, against the planned performance, is a recurring analytical workload. For some sub-sectors there are commercial benchmarking datasets (TRL benchmarks for toll roads, ATRS for airports, regulated benchmarks for utilities). For others there is not.

The analytical work involves pulling the asset's data, normalising it, comparing to peers, identifying structural drivers of variance, and presenting it in a form the asset board can act on. This is heavy work, done quarterly at most, despite being the central diagnostic for whether the asset is being run well.

What AI does: maintains the benchmarking continuously. The asset's KPIs are calculated from the underlying operational data. Peer comparisons are constructed where data is available. Variance attribution decomposes performance into the structural drivers. The asset board sees the diagnostic continuously rather than quarterly.


9. Inflation, interest rate and FX exposure

Current state: infrastructure portfolios have unusually heavy macroeconomic exposure that is mediated through specific contractual and operational mechanics. Some assets have inflation-linked tariffs. Some have inflation pass-through with a lag. Some have no inflation protection. Interest rate exposure depends on the debt structure, the hedging position, and the regulatory recovery mechanism for cost of debt. FX exposure depends on the currency of the cash flows and the currency of the debt.

Modelling the portfolio's sensitivity to macro variables, in a way that respects the specific mechanics of each asset, is a substantial analytical workload. Most platforms maintain a high-level view that is correct in direction but not in magnitude.

What AI does: maintains the macroeconomic exposure view at the asset level, linked to the contractual mechanics. The Spanish toll road's 70% inflation indexation with a 12-month lag is modelled correctly. The UK water utility's RPI-linked regulated revenues with the K factor are modelled correctly. The PPP hospital's CPI-linked availability payment with the indexation cap is modelled correctly. Aggregating to the portfolio level produces a view of net exposure that respects the mechanics, not just the headline directions.
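The mechanics described above - a pass-through share of lagged inflation, possibly capped - can be sketched as a formula. The annualised mechanics and all numbers here are illustrative, not any specific contract:

```python
def indexed_tariff(base_tariff, cpi, year, pass_through=1.0,
                   lag_years=1, cap=None):
    """Next tariff under a simplified indexation formula: a pass-through
    share of lagged annual CPI inflation, optionally capped.
    `cpi` maps year -> index level; mechanics and numbers are invented."""
    ref = year - lag_years
    inflation = cpi[ref] / cpi[ref - 1] - 1
    uplift = pass_through * inflation
    if cap is not None:
        uplift = min(uplift, cap)
    return base_tariff * (1 + uplift)

cpi = {2022: 100.0, 2023: 108.0, 2024: 112.3}
# 70% pass-through of 2023 inflation (8%), applied with a one-year lag
t = indexed_tariff(10.00, cpi, year=2024, pass_through=0.70)
```

Getting the lag and the pass-through share right per asset is exactly the difference between a portfolio exposure view that is correct in direction and one that is correct in magnitude.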

For LP reporting, for board reporting, for IC discussions on new opportunities, having this view current at all times changes the quality of the conversation.


10. Force majeure, change in law and dispute analytics

Current state: infrastructure assets occasionally experience force majeure events, change in law events, or contractual disputes. When they do, the analytical and legal work is intense. Reading the contract, building the case, modelling the financial impact, supporting the negotiation or the litigation. This is workload that arrives in spikes rather than continuously, and most platforms staff for the average rather than the spike.

What AI does: it does not replace the legal work. What it does is collapse the data assembly that legal work depends on. The relevant contractual provisions are already extracted. The historical performance data needed to support the case is already structured. The financial model is already linked to the underlying mechanics. The legal team starts from a position where the analytical inputs are ready and spends their time on the legal substance.

For the platforms with regular exposure to these workstreams - which is most large platforms - this is meaningful capacity recovery.


11. ESG, taxonomy alignment and sustainability reporting

Current state: infrastructure platforms have unusually heavy ESG reporting obligations because their LPs and regulators demand it, and because infrastructure assets often have direct community and environmental impact. EU taxonomy alignment, SFDR Article 8/9 reporting, GRESB benchmarking, embodied carbon assessments, biodiversity, worker safety, community engagement. Each requires its own data, its own framework, its own report.

What AI does: maintains the ESG data layer alongside the operational data. Reports get produced from the data layer rather than from a parallel workstream. Taxonomy alignment is calculated from the underlying activity data. GRESB submissions get supported by the existing data layer. The team's time goes into the actual sustainability work - designing carbon reduction programs, managing community relations, improving worker safety - rather than into the reporting.


Sub-sector specific workflows

The cross-cutting workflows above apply across infrastructure. Below I work through the analytical workflows that are genuinely sub-sector specific, where the underlying analytical content differs.


Transport infrastructure: toll roads, airports, ports, rail

The central analytical question in transport infrastructure is demand. Traffic on a toll road, passengers through an airport, throughput through a port, freight on a rail network. Demand is the variable that determines revenue, and demand has structural drivers (GDP, regional economic activity, network effects, competing modes) and stochastic drivers (weather, fuel prices, geopolitical events, consumer sentiment, post-pandemic patterns that have not yet stabilised).

Demand forecasting in transport infrastructure is a discipline. Traffic studies for toll road bids cost millions of euros and run for months. Passenger forecasts for airports involve the same depth of work. The forecasts that get built for the bid become the basis for the financing case, and they go into a drawer.

What AI does in transport demand analytics: maintains a continuous forecast against actual performance, decomposes variance into the structural drivers (GDP variance, fuel price variance, demographic variance, competitive variance), and updates the forward view as the structural drivers move. The forecast that was built for the bid is reconciled against actual performance. Where it has been wrong, the structural reasons are identified. The forward view going into the next regulatory or contractual cycle starts from a current understanding rather than from the original forecast that has not been revisited in five years.

For toll roads specifically, the platform maintains the linkage between observed traffic, the tariff structure, the indexation mechanism, the seasonal and daily profile, and the resulting revenue. Variance analysis is granular - corridor by corridor, hour of day, day of week - and the underlying drivers are attributable. For airports, the platform maintains the linkage between passenger throughput, the aeronautical revenue structure (per-passenger charges, per-aircraft-movement charges, security charges, parking and aircraft handling), and the non-aeronautical revenue stream (retail, parking, real estate, advertising). The non-aeronautical revenue, which often determines the value of an airport asset, has its own analytical mechanics that the platform supports continuously.
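The granular variance analysis reduces, at its simplest, to a price/volume decomposition per corridor. A sketch with invented numbers:

```python
def revenue_variance(vol_plan, price_plan, vol_act, price_act):
    """Two-way decomposition of revenue variance into a volume effect
    and a price effect (the cross term is folded into the price effect)."""
    volume_effect = (vol_act - vol_plan) * price_plan
    price_effect = (price_act - price_plan) * vol_act
    total = vol_act * price_act - vol_plan * price_plan
    return volume_effect, price_effect, total

# Invented corridor: plan 40m transactions at 3.20, actual 42m at 3.35
vol, price, total = revenue_variance(40e6, 3.20, 42e6, 3.35)
```

The platform runs this per corridor, per hour of day, per day of week, and then attributes the volume effect onward to its structural drivers.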

For ports, the central analytical content is throughput by cargo type, terminal capacity utilisation, dwell time analytics, and the relationship between volumes and the tariff structure under the concession. For rail, the central content is path utilisation, track access charges, and the contractual structure with operators.


Digital infrastructure: data centres, fibre, towers

The structural question in digital infrastructure is unit economics. A data centre is a stack of capital costs (land, building, power infrastructure, cooling, IT shell), operational costs (power, maintenance, security, staff), and revenue (per-MW lease, per-rack lease, ancillary services) that have to combine into a return on a 15-25 year asset life. The unit economics differ between hyperscale colocation and retail colocation, between edge and core deployment, between markets with cheap power and markets with expensive power, between markets with grid constraints and markets without.

What AI does in digital unit economics: maintains the per-asset unit economics at high granularity. Power usage effectiveness (PUE) is calculated continuously from operational data. Lease yield is calculated from the contracted rents and the deployed capacity. The marginal economics of incremental fit-out are modelled against the underlying capital cost stack. The implications of power tariff changes, of grid connection delays, of cooling efficiency changes, are surfaced at the asset level and aggregated to the portfolio level.
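PUE itself is a simple ratio; the value is in computing it continuously from metered data rather than in a quarterly spreadsheet. A sketch with invented figures:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness: total facility energy over IT energy.
    1.0 is the theoretical floor; lower is better."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative month: 5.2 GWh total facility draw, 4.0 GWh to IT load
ratio = pue(5_200_000, 4_000_000)
```

Tracked continuously, the same ratio becomes a cooling-efficiency diagnostic: a drift upward at constant IT load points directly at the cooling stack.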

For fibre networks, the analytical content is route economics, take-up rates by market, cost-to-pass and cost-to-connect, overbuild risk from competing networks. The platform maintains the per-route and per-market view of unit economics. For towers, the analytical content is colocation rates, lease economics with mobile network operators, and the coverage and capacity dynamics that drive new tenancy demand. The platform maintains the per-tower and per-market view.

The grid connection issue has become unusually important for both data centres and renewables-linked digital deployment. In several major markets - Northern Virginia, Dublin, parts of the UK, Frankfurt - grid connection is now the binding constraint on data centre deployment. Tracking the grid connection status across a development pipeline, with realistic timelines and realistic capacity assumptions, is a workload that has emerged in the last 24 months. AI lands on it directly.


Social infrastructure: PPP/PFI, hospitals, schools, justice, accommodation

The structural mechanics of social infrastructure are availability-based. The asset is built, the public sector counterparty pays an availability payment over the concession life provided the asset is available and meets the service specification. Performance deductions reduce the payment when service standards are missed.

The analytical workload is contractual interpretation, performance management, and lifecycle planning. The contracts are dense - a typical UK PFI hospital has a project agreement of 500-1500 pages with detailed performance regimes, deduction mechanisms, change protocols, and hand-back provisions. The performance data flows continuously and has to be reconciled against the deduction regime. Lifecycle planning - when to replace what asset element to maintain availability and avoid deductions - is a multi-decade optimisation problem.

What AI does: extracts the performance regime from the contract into a structured form, applies it to the actual performance data, calculates the implied deductions, reconciles against the actual deductions claimed by the authority. Lifecycle planning is supported by the linkage between asset condition data, the replacement cost stack, and the implications for performance. Hand-back planning, which becomes a major workstream in the final years of any concession, is supported continuously rather than as a discrete project.
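Once the performance regime is extracted into structured form, applying it is mechanical. A deliberately simplified sketch - real PFI regimes add ratchets, caps and rectification windows, and all rates and categories here are invented:

```python
def monthly_deduction(events, unit_rates):
    """Apply a simplified performance regime to a month's failure events:
    deduction = unavailable hours * the contractual rate for that
    (area, severity) pair. Rates and categories are invented."""
    return sum(e["hours"] * unit_rates[(e["area"], e["severity"])]
               for e in events)

rates = {("clinical", "high"): 500.0,
         ("clinical", "medium"): 150.0,
         ("non_clinical", "medium"): 60.0}
events = [
    {"area": "clinical", "severity": "high", "hours": 6},
    {"area": "non_clinical", "severity": "medium", "hours": 40},
]
d = monthly_deduction(events, rates)  # 6*500 + 40*60
```

Running the extracted regime against the performance feed is what lets the asset manager reconcile the implied deductions against what the authority actually claims.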

For platforms with portfolios of social infrastructure assets, where the analytical workload per asset is moderate but the asset count is high, the productivity gain from this layer is substantial.


Water and waste

The structural mechanics in water and waste are regulated. In the UK, Ofwat sets price controls on five-year cycles. In other jurisdictions, the regulatory frameworks differ but the analytical structure is similar - the regulator sets a return on capital, a cost recovery framework, and a service quality regime, and the asset's economics are governed by how it performs against these.

The analytical workload is regulatory submission, performance management, and totex optimisation. Regulatory submissions are heavy - for AMP8 in the UK, water companies have submitted business plans of thousands of pages with extensive supporting data. Performance management against the regulatory ODI (outcome delivery incentive) regime is continuous. Totex optimisation - choosing how to allocate capital and operational expenditure to maximise the regulated return while meeting the service requirements - is the central management question.

What AI does: maintains the regulatory submission data layer continuously. Performance against ODIs is calculated from the underlying operational data with full traceability. Totex allocation decisions are supported by the linkage between the cost stack, the performance impact, and the regulatory return. The five-yearly submission process becomes a continuous one, with the data and the analysis already in place when the regulator's deadlines arrive.
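The ODI calculation itself is simple once the performance data is traceable. A simplified sketch - real Ofwat mechanics add deadbands and per-measure definitions, and all numbers here are invented:

```python
def odi_payment(outperformance, incentive_rate, penalty_rate,
                cap=None, collar=None):
    """Simplified outcome delivery incentive: reward outperformance
    against the regulatory target, penalise underperformance, bounded
    by an optional cap and collar. All numbers are invented."""
    rate = incentive_rate if outperformance >= 0 else penalty_rate
    payment = outperformance * rate
    if cap is not None:
        payment = min(payment, cap)
    if collar is not None:
        payment = max(payment, collar)
    return payment

# 5 units of outperformance at 0.8 per unit, capped at 3.0
p = odi_payment(5.0, incentive_rate=0.8, penalty_rate=1.2, cap=3.0)
```

The hard part is not the arithmetic but the traceability: each measure's actual performance has to be derivable from the underlying operational data in a form the regulator will accept.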


Grid and transmission

The structural mechanics in grid and transmission are regulated, plus increasingly governed by congestion and capacity dynamics. The investment thesis in grid infrastructure has changed materially over the last three years - the energy transition has shifted transmission from a steady regulated business to one with unusually heavy capital deployment requirements, and the bottleneck on grid capacity has become a strategic issue across most developed economies.

The analytical workload includes regulated return analysis, capital deployment planning, congestion analytics, and increasingly, the interface with renewable generation and storage assets that are exposed to the same grid capacity dynamics.

What AI does: maintains the regulatory framework as for water and waste. In addition, the platform maintains the linkage between transmission capacity, congestion patterns, and the value implications for generation assets that share the same nodes. For platforms that own both transmission and renewable generation, this cross-asset analytical capability is direct value.


What the operating model looks like once this is in place

Infrastructure is the sector where the heterogeneity of the asset base creates the largest analytical drag, and where the productivity unlock from a unified analytical layer is correspondingly substantial. Once the data layer is connected and the analytical layer above it is functional, the operating tempo of the platform shifts in several specific ways.

The asset management function shifts from per-asset to portfolio. The asset manager covering the Spanish toll road and the asset manager covering the UK water utility currently operate in parallel, sharing workflows only at the level of formal review processes. With the analytical layer in place, the cross-asset analytical work - portfolio-level inflation exposure, portfolio-level regulatory exposure, portfolio-level capital allocation - happens continuously rather than as a discrete project.

The investment function shifts from constrained to opportunistic. The deal team can evaluate multiple opportunities at the same depth they currently apply to one. In the infrastructure secondary market, where transaction processes are competitive and the analytical workload is the main bottleneck on participation, this changes the platform's deployment capability.

The financing function shifts from periodic to continuous. The treasury team has a current view of capital structure across the portfolio. Refinancing windows are identified opportunistically rather than reactively. Hedging decisions are made against a current view of underlying exposure.

The reporting function shifts from assembly to narrative. The quarterly NAV cycle, the LP reports, the lender reports, the board packs - all start from the data layer and the team's time goes into the strategic content.

The technical and engineering disciplines stay human. Asset condition assessments still require qualified engineers. Regulatory submissions still require specialist counsel and economic consultants. Investment decisions still require experienced principals. None of this changes. What changes is the ratio of analytical time to assembly time across the entire platform, and the alignment of the cadence of analytical work with the cadence of the underlying business.


Where the platform layer fits

What I have described is a connected data layer covering contractual frameworks, financial models, operational data, regulatory environments, market data, capital structure, and ESG inputs across a heterogeneous asset base; a set of modular analytical engines for the workflows above; and an interface that lets the team get answers to cross-asset questions without going through a multi-person workstream first. Building all of this is a multi-year effort, and platforms that try to do it in-house run into the same problem as elsewhere: the existing data layer is the thing that needs replacing.

This is what we built Honeycomb for. The platform ingests infrastructure data - contracts, financial models, operational data, regulatory filings, market data, ESG inputs - into a unified knowledge graph. It maintains a queryable digital twin of every asset, regardless of sub-sector. The modular analytical engines cover the cross-cutting workflows and the sub-sector specific workflows above: contract intelligence, financial model maintenance and scenario analysis, valuation and NAV, due diligence acceleration, refinancing and capital structure, regulatory exposure, lender and LP reporting, performance benchmarking, macro exposure, force majeure and dispute analytics, ESG reporting, plus the sub-sector specific analytical content for transport, digital, social, water and waste, and grid. All outputs are traceable to source. The interface is natural language.

The architecture is the same one we built for mining, upstream oil and gas, and renewables, because the structural problem is the same - fragmented technical, operational and contractual data, analytical workflows that depend on cross-domain reconciliation, decisions that depend on synthesis. The specific analytical engines are different because infrastructure is unusually heterogeneous, but the foundation is shared.

The platforms using Honeycomb are not buying it because it is novel. They are buying it because the alternative - continuing to run a multi-billion-pound infrastructure platform on the analytical infrastructure that was adequate when they were a few hundred million - is no longer competitive. The teams whose analytical cadence matches their data cadence will outperform the teams whose analytical cadence matches their reporting cycle. The window for getting on the right side of that gap is open now.


Where to start

If you develop, finance, operate or invest in infrastructure, the first thing to do is not to commission a digital strategy review. It is to take a single workflow you currently run more slowly than you should, and run it through the platform.

Some workflows that produce immediate, measurable results:

  • A contractual review of a single asset's concession agreement. Run the contract through the platform and compare what comes out to your existing internal summary. See how much that summary is missing.
  • A scenario analysis on a portfolio-level macro question. Pick the question your IC is currently debating - inflation exposure, interest rate exposure, regulatory exposure on a particular framework - and run it across your portfolio with the platform.
  • A target asset's data room. Take an opportunity from a recent process or one currently in market. Run it through, get the synthesised view, and see what the cycle time looks like compared to the manual process.
  • A portfolio NAV refresh against the most recent market data. Run the valuation across your assets with current macroeconomic inputs and current operating performance, and compare to your most recent formal NAV cycle.
  • A regulatory exposure assessment for a specific framework - IRA in the US, the UK water price control, a specific concession-based framework in your portfolio - and see how the platform handles the cross-asset implications.

Free trial at honeycomb.sirca.io. Upload a single asset, run a workflow you currently run, and decide for yourself whether the cycle time looks the same. The point is not to be sold a platform - the point is to see what your existing workflow looks like with the synthesis layer solved. That is the only test that matters, and it is the test the platforms ahead of the curve are already running.