The Container Port Performance Index is now in its fifth edition and has achieved a certain prominence as a document to be scrutinised on release. But is it being used for the right purpose and is it robust enough to justify its title?

A docked container ship.

Would the construction of the Index benefit from a Working Group comprising representatives from key industry sectors, tasked with refining the methodology and ironing out problem areas?

The Container Port Performance Index 2020 to 2024: Trends and Lessons Learnt (World Bank, 2025) has recently been released, jointly prepared by the World Bank and S&P Global Market Intelligence.

It is the fifth edition of a work which, while called an Index, is effectively a ranking of more than 400 ports worldwide based on “time expended in a vessel stay in port.” This latest edition also reviews trends over the five years since the Index was first released.

The core function of the Container Port Performance Index (CPPI) is to provide a comparative global assessment of container port performance, using vessel time in port as the key metric. Specifically, it focuses on vessel time in port and the number of containers moved, with the underlying methodology combining what the authors call administrative and statistical approaches.

The administrative approach provides a direct measure of port performance using vessel time in port, adjusted for operational variables: ship size, categorised into five predefined groups by TEU capacity, and call size, categorised into ten predefined groups by number of container moves per port call (load + discharge + restow). The statistical approach applies “multivariate factor analysis (identify patterns between multiple variables) to derive latent performance dimensions from a set of correlated indicators.”
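As an illustration only – the band edges below are hypothetical placeholders, not the CPPI’s actual ship-size or call-size groups – the administrative adjustment can be sketched as assigning each port call to a (ship-size group, call-size group) cell and comparing its port hours against the average for that cell:

```python
from statistics import mean

# Hypothetical band edges -- the CPPI's actual five ship-size groups and
# ten call-size groups are not reproduced here.
SHIP_SIZE_EDGES = [1500, 5000, 10000, 15000]        # TEU -> 5 groups
CALL_SIZE_EDGES = [250, 500, 1000, 1500, 2000,
                   3000, 4000, 5000, 6000]          # moves -> 10 groups

def bucket(value, edges):
    """Return the index of the group that `value` falls into."""
    for i, edge in enumerate(edges):
        if value < edge:
            return i
    return len(edges)

def adjusted_hours(calls):
    """For each call, compare its port hours with the average of its
    (ship-size group, call-size group) peer cell; negative = faster."""
    cells = {}
    for c in calls:
        key = (bucket(c["teu"], SHIP_SIZE_EDGES),
               bucket(c["moves"], CALL_SIZE_EDGES))
        cells.setdefault(key, []).append(c["hours"])
    return [c["hours"] - mean(cells[(bucket(c["teu"], SHIP_SIZE_EDGES),
                                     bucket(c["moves"], CALL_SIZE_EDGES))])
            for c in calls]

calls = [
    {"teu": 14000, "moves": 2800, "hours": 30.0},
    {"teu": 14500, "moves": 2900, "hours": 40.0},   # same cell, slower
    {"teu": 1200,  "moves": 180,  "hours": 12.0},   # different cell
]
print(adjusted_hours(calls))  # -> [-5.0, 5.0, 0.0]
```

The point of the grouping is that a 40-hour call by an ultra-large vessel moving 2,900 boxes is only compared with similar calls, not with a small feeder call – which is precisely the comparability that several of the criticisms below argue is not fully achieved in practice.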

The methodological explanations do remain unacceptably vague, however, and clear examples of calculations should be shared to allow third parties to examine the theoretical approaches in detail.

The commentary accompanying the latest index states in terms of its purpose and its rationale: “The CPPI is intended to serve as a diagnostic and planning tool. The aim is not to benchmark ports against one another but rather to help port authorities, governments, and private stakeholders identify where and how improvements are taking place, and under what conditions. It provides a starting point for constructive dialogue on investment, reform, and innovation in port infrastructure and operations.”

Theory and Practice

In reality, however, ranking one port against others is arguably the main way in which the Index is interpreted. The competitive element rises to the fore: each release prompts a flurry of news releases highlighting good performances or improved rankings from one year to the next. Such releases confirm that the more analytical aspects of a position in the Index are often overlooked in favour of highlighting the position achieved relative to other parties.

A smaller container ship at anchor.

The CPPI is based on vessel time in port, which includes time at berth as well as time spent waiting for a berth or at anchor.

This becomes even more apparent when the results are effectively weaponised – for example, one party suggesting that because two ports with the same operator sit in the lower rankings, that operator should be “eliminated” when assessing candidates for an operating role in another port.

The flaw in this argument – and in any use of the Index for comparison – is that it assumes the methodology behind assessing container port performance is entirely fit for purpose. And particularly from the terminal operating sector – made up of the entities in the front line of container handling – there is a growing body of criticism in this respect.

One clear problem is that port performance is used as a proxy for container terminal performance. This is unfortunate as terminal operators very often cannot control overall port performance. More should be done to distinguish between the two.

If the rankings in the Index served merely as an indicator for the intended purpose of providing “…a starting point for constructive dialogue on investment, reform, and innovation in port infrastructure and operations,” then no doubt there would be less criticism. But as more editions of the Index have come out, and it has gained recognition as a method of comparative performance measurement, it has increasingly been utilised in a competitive context – by port administrations with terminal operators, between port administrations and, to a certain extent, by shipping lines with ports.

This trend has, in turn, put the methodology behind the Index under the microscope, and significant criticisms have emerged contending that the criteria employed are not sufficiently robust to justify such an Index and the rankings contained therein.

Port Strategy has received criticisms of the Index from a range of parties, the majority of which wish to remain anonymous. For the sake of uniformity, PS has decided not to attribute specific comments and remarks but to present the feedback on methodology under various headings, as featured in Table 1. The feedback is based on both the latest 2024 version of the Index and the previous edition, The Container Port Performance Index 2023.

Table 1: Concerns raised in conjunction with the methodology employed for the compilation of the container port performance index
METHODOLOGY: KEY AREAS OF CONCERN RAISED
1. Single Productivity Measure
  • Focuses on vessel time in port as the core metric of performance. Port performance is more than just port hours. It also includes reliability, flexibility, connectivity, sustainability and more.
  • A comprehensive measure of port productivity should factor in vessel turnaround time at berth, crane rate, number of shipping connections and asset utilisation in addition to port hours.
  • Port hours can also be impacted by external factors outside of a port’s control, such as ship arrival patterns and bunching – all having the possibility to influence port hours even if the port itself is performing well.
  • In comparison, the World Bank’s Logistics Performance Index considers six dimensions for evaluation: customs efficiency, quality of trade and transport infrastructure, ease of arranging shipments, competence and quality of logistics services, tracking and tracing capability, and timeliness.
2. Clarity of Definition between a Port Versus a Terminal
  • Comparing a single terminal in a port with a multi-terminal set-up is inherently misleading and inevitably distorts results.
3. Comparability Across Port Types
  • Transshipment terminals face the additional challenge of managing the different arrival schedules of main-line vessels and feeder vessels to connect cargoes optimally. Transshipment hubs face inherent operational asymmetries when compared against gateway ports, as they may at times need to provide time-recovery services and stowage corrections. Greater clarity could come from ranking ports within peer categories.
4. The Weighting System as Employed Favours Ports Handling Larger Vessels, Disadvantaging Those with Smaller Vessel Profiles
  • Ports handling both main haul vessels (e.g. TransPacific and Asia-Europe services) and regional feeders are at a disadvantage compared to ports that handle predominantly main haul mega vessels. The case is put that normalising vessel size profiles allows for fairer comparisons between ports.
5. The Exclusion of Certain Time Components
  • Consistent rules should be applied when including or excluding time components to avoid bias.
  • For example, bunkering may occur either before or after port operations. However, the report excludes bunkering time if it takes place after port operations but includes it if it occurs before operations.
6. Publishing Rankings with Minuscule Differences Between Ports
  • The case is put that port rankings would be more meaningful if ports were grouped into performance clusters. Minor numerical differences, such as 10 versus 11 average waiting hours, do not always represent a statistically significant difference in port performance.
7. Expertise View
  • In the Logistics Performance Index, used to rank global logistics performance, the World Bank incorporates expert views in addition to objective hard data, to score logistic performance. In complex systems like logistics and port services, hard data alone may not paint the full picture. The input of experts will supplement and provide a more balanced measure of performance and thereby enhance the quality of performance ranking.
8. Ports Serving One or Two Major Customers or Services Gain an Advantage
  • Such ports are better able to coordinate vessel schedules and thus lower the incidence of vessel bunching (which invariably results in waiting time).
  • It would be more meaningful if the ports are classified in the various categories and ranked accordingly within these categories.
9. Port Service Hours
  • Ports that do not operate on a 24/7 basis and/or are restricted in terms of night time navigation are disadvantaged. This also has the potential to impact waiting times at anchor.
10. Differences in the Impact of Exogenous Factors (e.g. COVID-19) Across Regions
  • Exogenous factors which impact port productivity may not be uniform across regions. The CPPI 2020-2024 reported that COVID-19 seemed to have a greater impact on North American ports than on others. Hence, in different time periods, local or global disruptors may have varying degrees of impact on ports, and without knowledge of such disruptions readers may have a distorted view of the rankings.
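The clustering idea raised in point 6 of Table 1 can be sketched simply: instead of assigning every port a distinct rank, ports whose average waiting hours sit within some tolerance of their neighbour share a tier. The port names, hours and the 1.5-hour tolerance below are hypothetical, chosen only to illustrate the mechanism:

```python
def cluster_ranks(ports, tolerance=1.5):
    """Group ports into performance tiers: a port whose average waiting
    hours differ by less than `tolerance` from the previous port joins
    that port's tier rather than receiving a distinct rank."""
    ordered = sorted(ports.items(), key=lambda kv: kv[1])
    tiers, current = [], [ordered[0]]
    for name, hours in ordered[1:]:
        if hours - current[-1][1] < tolerance:
            current.append((name, hours))   # within tolerance: same tier
        else:
            tiers.append(current)           # gap is material: new tier
            current = [(name, hours)]
    tiers.append(current)
    return tiers

# Hypothetical ports: A and B differ by only 1 hour, C and D by 0.5.
ports = {"A": 10.0, "B": 11.0, "C": 18.0, "D": 18.5}
for i, tier in enumerate(cluster_ranks(ports), start=1):
    print(f"Tier {i}: {[name for name, _ in tier]}")
# Tier 1: ['A', 'B']
# Tier 2: ['C', 'D']
```

Under such a scheme the 10-versus-11-hour distinction cited in point 6 would place both ports in the same tier, which is the substance of the criticism: a strict ordinal ranking manufactures differences the data cannot support.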

Disruptive Factors

Factors such as those in Table 1 are identified as disruptive to the pursuit of an accurate measure of container terminal performance. A central part of the problem seems to be that many parties do not use the Index as the World Bank intends it to be used – as a diagnostic and planning tool in a big-picture context. It is, though, perhaps just human nature that it will be utilised quite extensively in a competitive ranking context. The thought occurs: is this why, in the latest edition, the format of the ranking is not quite so distinct as in previous editions – to try to achieve a shift away from this type of usage?

The fact remains, however, that in numerous detailed contexts the Index is not perfect, and so long as it is used in a comparative context it will lead to issues. The factors listed above as problematic are by no means exhaustive. Overall, there is significant scope for:

  1. The message to be made clearer as to what the authors intend the Index to be used for.
  2. A fresh look to be taken at the methodology employed in its construction, in pursuit of a more robust result.

This would additionally serve to strengthen the new coverage in the 2024 report which examines changes in port performance over time and aims to provide stakeholders with insights into whether a given port’s CPPI has increased, declined, or remained stable. The authors state: “This marks a significant evolution from annual snapshots to a longitudinal perspective, enabling a deeper understanding of the structural patterns in container port efficiency.”

But only, of course, if the projections come off a solid base.

Wide view of several container ships at port with gantry cranes overhead.

It is suggested that using the same methodology to measure gateway ports and transshipment ports is a flawed approach.