Long-term performance improvement rates are defined as the “… trend of non-dominated (i.e. record-breaker) performance data points for the overall technology domain (not for individual product generations, individual companies or components)”. It is important that the metrics be constructed as a measure of technical benefit per unit of technical cost: for a combustion engine, this could be W/L, W/kg, or W/$. The rates reflect the improvement in the envelope of technical performance, not average performance, since individual products can trade one attribute against another.
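
The definition above can be made concrete with a small sketch: first filter a set of (year, performance) observations down to the non-dominated "record-breaker" points, then estimate the annual improvement rate as the slope of a log-linear fit through that frontier (i.e. an exponential trend). The data below and the function names are purely illustrative assumptions, not real measurements or an official methodology.

```python
import math

# Hypothetical (year, performance) observations for one technology domain,
# e.g. specific power in W/kg. Values are illustrative only.
observations = [
    (2000, 10.0), (2001, 9.5), (2002, 12.0), (2004, 11.0),
    (2005, 15.0), (2007, 14.0), (2008, 19.0), (2010, 24.0),
]

def record_breakers(points):
    """Keep only non-dominated points: each must beat every earlier record."""
    best = float("-inf")
    frontier = []
    for year, perf in sorted(points):
        if perf > best:
            frontier.append((year, perf))
            best = perf
    return frontier

def annual_improvement_rate(points):
    """Least-squares fit of log(performance) = a + b * year over the
    record-breaker frontier; the yearly improvement rate is exp(b) - 1."""
    frontier = record_breakers(points)
    xs = [year for year, _ in frontier]
    ys = [math.log(perf) for _, perf in frontier]
    n = len(frontier)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
    return math.exp(b) - 1

rate = annual_improvement_rate(observations)
print(f"Estimated improvement rate: {rate:.1%} per year")
```

Note that the dominated points (2001, 2004, 2007) are ignored entirely: only the advancing envelope contributes to the estimated rate, which is the point of the "record-breaker" construction.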

From the empirical data we have, most of these metrics tend to improve at similar rates for a single technology (Benson, 2014, p. 208). This does not mean that a technology is equally good on all metrics: lithium-ion batteries, for example, are better on energy density while lead-acid batteries are better on cost. It means only that the different metrics for a given technology improve at roughly the same rate.

An understanding of improvement rates can strengthen investment strategy in several ways:

  1. Improvement rates (Ks) help analyze development time scales. Private-sector players are optimized to spot and develop short-horizon technologies. Organizations can therefore prioritize important technologies with longer time horizons, which private-sector players pass on, using return expectations more accurate than those of SMEs or the popular narrative.

  2. From a purely financial perspective, fast-moving technologies tend to have better return profiles and possibly shorter development cycles than slower technologies. Advanced research projects in those areas can provide faster payoffs and faster development of capabilities. These technologies are ideal for VCs, SMBs, corporates, and others for which financial returns are important.

  3. Slow-moving technologies are good candidates for budget cuts, if needed, while limiting the impact on technical advancement in a given technology area. They are also the areas in which to start looking for functional alternatives or substitutes if the capability is critical.

To sum up, depending on the criteria of interest for a given technology (such as size, weight, power, cost, or availability), a forecast of the improvement rate can help guide investment strategy toward specific targets and capabilities. This raises the probability of distinguishing winners from losers among technologies that have not yet matured, helping optimize both the investment strategy and its time horizon.


Technology decision-makers often face the choice of investing in one core technology (believing it crucial to their mission or existence), an alternate technology (a key “disruptor”), or multiple technologies to hedge against failure. Information on improvement rates is crucial to making the best choices in such allocation decisions. Although these decisions are made at a point in time, it must be recognized that they deal with outcomes that change, sometimes rapidly, over time.

Often, the readiness date of a key alternate technology (the “disruptor”) is assessed through subjective expert opinion. Beyond the usual issues with subjectivity, this approach has fundamental problems. New technologies are often over-hyped (to the extent that there is now a well-known model, the Gartner hype cycle). The creators of a technology, and those covering them, often have an incentive to overstate how soon it will be ready[1]. Decision-makers can be misled by the gap between lab behavior and real-product behavior, and leadership can begin to believe dangerously in a “reality distortion” fed by their own or their team’s inflated self-assessment.

This can lead to large losses for multiple stakeholders, including investors, consumers and users, employees, and often the general public. Examples include the probable over-valuation of Uber and others driven by badly wrong estimates of when autonomous cars would be viable, the medical harm caused by Theranos, and the busts in virtual reality: all cases that are essentially about incorrect, overly eager estimates of the timing of potentially valuable technologies.

All technologies require relatively long periods of development before they become market-ready, and under-estimating the time to completion can lead to loss of trust, serious disillusionment, and withdrawal of support, causing the critical mass of know-how to collapse (for instance, the artificial intelligence “winters” and the collapse of cleantech in the US). Furthermore, investing in a new technology too early can mean substantial costs (long investment periods without revenues) and ultimately unrealized benefits, as competitors may catch up quickly and prove more effective at marketing and at establishing a “dominant” design: early smartphone innovators such as IBM with the Simon received only losses for their innovation, while the well-designed and well-timed iPhone made a fabulous fortune for Apple. On the other hand, there is also a serious risk in under-estimating an upcoming technology. This is especially critical when the upcoming technology threatens the core technology underlying the incumbent’s main product; examples include Blockbuster, Nokia, Research in Motion (BlackBerry), Barnes & Noble, and others.

Thus, it is important for technology managers of all kinds to make accurate estimates of technology timeliness and readiness. It follows that most technology managers consider the rate of improvement of performance for a technology an important indicator of its potential future importance[2]. What limits them in acting on that correct intuition is the lack of objective information about rates of improvement and the lack of clarity about how to use that information most effectively.

[1] Funk, Jeffrey. 2019. “What’s Behind Technological Hype?”

[2] Hoisl, Karin, Tobias Stelzer, and Stefanie Biala. 2015. “Forecasting Technological Discontinuities in the ICT Industry.” Research Policy 44 (2): 522–32.
