Renewable intelligence from Oxford University

Precipitated by the convergence of the Ukraine crisis and the existing net zero commitments under the Paris Agreement, governments around the world have made energy sovereignty and decarbonisation key policy ambitions.

The leading solution to achieve both national energy independence and the transition to clean energy has been the adoption of renewable power. And it’s happening far faster than expected: the IEA reports that renewables are set to account for over 90% of global electricity capacity growth by 2030, overtaking coal as the largest source of electricity generation along the way.

But despite this momentum, there has been a persistent problem with the main renewable vectors of solar and wind power: they are, by their very nature, intermittent in generation, while demand for electricity is rather more continuous, at least between morning and night.

As a consequence, energy storage has become the critical enabling technology to make renewables work, in particular grid-scale batteries that act as a buffer between the intermittency of power generation and the immediacy of demand. But this technology has, until now, been hampered by the cost and durability of large storage investments, because batteries degrade with every cycle of their operation.

Operators planning battery storage for grids therefore typically factor in the expected deterioration of the battery, either by oversizing it at the planning stage to provision for its progressive reduction in capacity, or by earmarking capital reserves to add capacity back in at some future waypoint in the system’s lifecycle. If unchecked, experts assess that by 2030, half of battery cell investment in the UK’s utility-scale energy storage will be spent on the replacement of degraded cells (1) – at a total cost of $700 million (2).

Not only are the costs of battery replacement eye-watering, but so too are the environmental consequences of this rate of battery replacement. Without intervention, the estimated carbon cost of replacing grid batteries by 2030 will be somewhere in the range of 4.3 to 28.8 million tonnes of CO2 (3, 4); at its upper limit, this equates to 150% of the carbon dioxide emitted by UK heavy goods vehicles (5). For a solution intended to contribute to a net zero ambition, this battery replacement cost in carbon and dollars is patently not viable.

Oxford Intelligence

Thankfully, a solution researched at Oxford University is now available as a novel technology platform developed by Brill Power to tackle the underlying challenge that causes batteries to degrade. At its heart is a solution to the core problem that every battery pack is, without intervention, limited to the performance of its weakest cell.

Brill’s Battery Intelligence Platform overcomes this limitation by ‘actively loading’ individual cells or modules – that is, charging and discharging each cell or module contingent upon its individual health. This allows all of the available energy from the cells to be used, thereby extending the lifetime of grid-scale energy storage systems by up to 80% for the most common thermal management approach (6).
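
The idea of loading each cell in proportion to its health can be illustrated with a minimal sketch. This is not Brill Power’s actual algorithm – the function name, the state-of-health weighting and the figures are all assumptions introduced purely to show the principle that healthier cells carry more of the load.

```python
# Illustrative sketch: split a pack's discharge current across cells in
# proportion to each cell's measured state of health (SoH), so degraded
# cells are loaded more gently instead of dragging the whole pack down.

def allocate_currents(pack_current: float, cell_soh: list[float]) -> list[float]:
    """Split a total pack current across cells, weighted by state of health.

    cell_soh: per-cell state of health, 1.0 = as new, lower = more degraded.
    Returns per-cell currents whose sum equals pack_current.
    """
    total = sum(cell_soh)
    return [pack_current * soh / total for soh in cell_soh]

# Three cells at differing health sharing a 100 A discharge:
currents = allocate_currents(100.0, [1.0, 0.9, 0.6])
# The healthiest cell carries the largest share; the weakest the smallest.
```

A conventional series pack would push the same current through all three cells, so the weakest cell ages fastest and caps usable capacity; proportional loading spreads wear more evenly.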

The significant battery lifetime extension of up to 80% means operators will no longer have to pre-provision for redundancy or plan for in-lifecycle battery replacement. This represents a cost reduction of 29% for a typical system installation while also reducing the carbon costs associated with earlier replacement of batteries by 50%. What’s more, actively managing the battery cycling allows the system to detect and isolate malfunctioning cells, making its operation more stable and safe.

The hardware also comes complete with an integrated power conversion capability which means an existing system can be easily scaled up without the additional cost or complexity of DC/DC converters.


To manage Brill’s breakthrough hardware, the BrillCore, the company has developed the world’s first universal operating system for batteries. BrillOS future-proofs operators by virtue of being agnostic to the battery chemistry, while also enabling over-the-air updates to the system.

Another challenge for large-scale battery systems is knowing and predicting their state of health and performance. BrillAnalytics is Brill’s in-house developed, cloud-based data platform, which provides end-to-end connectivity and insights – from cell to screen. The platform records and computes battery performance data to provide predictive insights that allow the streamlining of maintenance and the optimisation of battery usage.

At its core, Brill Power’s battery technology advance has created a critical link between renewable power and its capacity to act as an on-demand replacement for legacy power generation and with it, the picture for grid storage batteries becomes clearer.


If you’d like to know more about the groundbreaking work being undertaken by our clients, Brill Power, please drop an email to


Sources and assumptions:
1 Timera Energy: Battery investment cycle protects margin downside (2022)
2 Assumptions – Battery cost: $176/kWh, 4h systems (US Department of Energy, 2020)
3 MIT Climate Portal, 2022
4 Assumptions – Deployed capacity: BNEF (2022)
5 UK Department for Transport (2022)
6 Based on lifetime simulations for a 1MWh battery system by J.M. Reniers and D.A. Howey, Applied Energy, 2023

Rehabilitating AI’s Reputation

It’s rare (if not unheard of) for client technology to be charged with an unlimited rap sheet of crimes against humanity, ranging from posing an existential threat to health to bidding to take over the world, but that’s exactly what has happened to the entire machine learning and artificial intelligence sector since the launch of GPT-4 on March 13 earlier this year. The hullabaloo around generative AI technologies, or GAI for short, is separate and distinct from the many instances of Artificial Narrow Intelligence, or ANI, developed by many companies to automate specific and defined processes, such as disease mapping or natural language processing applications. The problem, however, is that this nuance is lost in the wider public debate, and thus any company with even a tangential association with any form of AI should be prepared for some form of reputational impact on its business.

“GPT-4 is exciting and scary,” New York Times columnist Kevin Roose wrote, adding that there are two kinds of risks involved in AI systems: the good ones, i.e. the ones we anticipate, plan for and try to prevent, and the bad ones, i.e. the ones we cannot anticipate. “The more time I spend with AI systems like GPT-4,” Roose writes, “the less I’m convinced that we know half of what’s coming.”

The torrent of Frankenstein headlines and ramping public concern has not been a phenomenon solely restricted to the UK. The World Economic Forum and Statista report that 27% of the world’s population has a fear of the technology ‘going rogue’ and while the reputational outlook may be bleak in the UK, our concern domestically indexes well below the worries expressed in markets such as India, China, Germany and the US.

The potential liabilities for businesses range from investability and broader commercial viability to a developing national and supranational appetite for additional and costly oversight or governance.

For companies caught in the eye of the storm, there are a number of practical PR steps in the five-point playbook below that can be adopted to help limit the potential damage:

Build distance

The AI moniker has become toxic. If your organisation is lucky enough not to have the terminology visibly integrated into your identity or brand, consider a ‘find and replace’ strategy that replaces references to artificial intelligence with machine learning, so far as this is technically and commercially accurate. While there are of course semantic differences, the terms can often be used interchangeably; ML conveys a more passive and benign technology.

Ring your fence

A great deal of AI applications are based on finite and ring-fenced use cases in narrow application. Be clear that your code has no capability, no matter how advanced, to ‘go rogue’ and evade human oversight. Script a short, defensive summary in layman’s terms to define the scope of operation of your AI – what it does and, more importantly, what it doesn’t do – particularly in ANI sectors. If it needs to be technical, keep it simple and sense-check it with audiences that have no domain expertise.

As a general rule of thumb, hold your defensive strategy for use in responsive situations; however, if your business has been reputationally imperilled by recent events, consider using your positioning strategy more actively, for instance as a temporary banner on your website.

Engage Positively

When the external context is challenging, the response of many businesses is to retrench by reducing their external communications output. But by far the best strategy is to do the opposite. Work through all your points of liability and develop counterpoints that are authentic and defensible. For instance, ‘AI will take away gainful employment’ can be positively countered with the assertion that ‘AI improves the quality of workplace endeavour by removing repetitive, low-grade tasks from employees, allowing more time to focus on rewarding, value-add activities.’ Expect every well-rehearsed brickbat to be dug out and hurled at you – and be prepared for all of them.

Once you are confident in your positive messages, take a look at task four. Then get ready to take your story out. If the media temperature is still hostile to good news about AI, deploy your stories on your social channels instead. Maintain community relations in shared media, but don’t engage in lengthy online debates with trolls who will never shift their opinion.

Commit to a Charter

If your business has not yet addressed the cornerstone AI issues, now is the time to do so. Be sure you have solid company protocols around all the potential areas of liability: ethics, regulation (approached with a positive and open-minded disposition), privacy, accountability, transparency, bias and safety. The best way to encapsulate these policies is to write a short, simple and matter-of-fact company charter that sets out what your business undertakes to do in all these critical areas of engagement.

Work Inside Out

Your staff are your reputational bellwether and your best ambassadors, so integrate them in all these processes. If your charter or your positive communications messages don’t play internally, then they are not fit for external consumption.


The Overlooked Science of Subjective Engineering?

The traditional perception of engineering is that it deals only in absolutes. But as the automotive industry has discovered, subjective engineering is equally critical. Jonathan Maybin, who leads automotive attribute engineering services at Whistle client, HORIBA MIRA, explains more.


(Article first published in the Engineer, October 5, 2022)

In a global car market worth an estimated $3.8tr annually, it is the brand that often drives customer choice, not only defining an OEM’s success but enabling them to charge a premium for performance. A car brand is, however, more than skin-deep and extends beyond the superficiality of car styling.

The subjective attributes of a car – how it handles, how it imbues a driver with confidence in its consistency, or how safe or exciting it feels – extend right down to the subconscious level for a prospective buyer, and define success in the intense competition to meet ever-more discerning consumer requirements.

This ‘engineering to consumer preference’ is made all the more complex by the attribute array for any given car model being myriad, extending potentially to thousands of characteristics – many of which may conflict with one another.

Delivering, for instance, a ride characteristic that a driver might regard as refined or luxurious could restrict the ability to deliver a feeling of high-performance feedback for a more active driving experience.

In order to deliver a finished vehicle with characteristics that are on-brand and reflect the virtues of a particular marque, the traditional approach relied on objective engineering processes. Only once a prototype had been developed could the subjective tuning begin as a test driver climbed behind the wheel to assess how a new design behaved dynamically.

Unfortunately, the relationship between objective and subjective engineering is not linear, comparable, nor predictable. Form may precede function, resulting in a vehicle that exhibits an unrewarding and forgettable user experience, leaving those behind the wheel ‘numb’ and creating an issue that requires significant investment and resource to resolve.

In recent years, with significant advancements in simulation and simulator hardware capability, the industry has made a step forward in the adoption of virtualisation via the use of driver-in-the-loop (DiL) simulators, accelerating the process of adding a human into the loop at an earlier stage in the virtual series of the development process.

The successes of DiL simulation are enabling a more rapid adoption of new and novel approaches, disrupting the traditional industry processes, particularly in the areas of hardware-in-the-loop development and autonomous vehicle validation.

This is currently where extensive physical product validation is not only driving cost but limiting the speed and roll-out of new technology. The result is a reduction in product development risk, time to market and the carbon footprint associated with building and testing multiple prototype vehicles, as well as improvements in ultimate product performance.

With a suite of simulators at HORIBA MIRA, our subjective attribute engineering puts the human-in-the-loop at the very start of the virtual series.

The use of sophisticated simulator tools gives the attribute development engineer the freedom to make accurate and unlimited attribute comparisons and refinements quickly and at lower cost, often with the mere flick of a switch.

This enables multiple informed subjective decisions to be made during the virtual phase of design, as the engineer works to balance conflicting attributes to achieve an optimised vehicle.

Built on the strength of the team’s extensive attribute engineering capabilities, HORIBA MIRA is paving the way forward with its novel approach of scenario fuzzing and multi-pillar testing to help reduce the burden of physical ADAS testing without a reduction in product performance.

The subjective engineering approach starts with a PALS assessment – a product attribute leadership strategy – that benchmarks class competitors with which the new vehicle must compete. A jury team will assess all competitor vehicles in a cascade of attribute levels, grading them across a range of performance from ‘class leader’ and ‘amongst leaders’ to ‘competitive’ and so forth.

With a fully tiered matrix of competitor attributes that break down into more granular details – for instance, grading handling attribute parameters when cornering in steady state or transient conditions – the new vehicle development team can set target attributes for the new car that will both meet brand expectations while also performing against the class competition.

With targets set, the subjective engineering process leads the vehicle’s development. The virtual series is now so powerful that complex components or systems such as active anti-roll bars, rear wheel steer and torque vectoring can all be developed and calibrated earlier in the vehicle’s development cycle, enabling the engineers to do more in a shortened development cycle.

This cascade of subjective attributes gives rise to an enormous scope for optimisation, one that may demand 1,500 or more variables be assessed and subjectively optimised before a single nut or bolt is manufactured.

As automotive engineering scales a revolution in new technologies – spanning electrified powertrains, connected and autonomous functionality, new human-machine interfaces and wholesale revisions to vehicle interiors to respond to new functional requirements – subjective attribute engineering has become ever more important to accommodate these developments without waste, time or cost.

Find out more about HORIBA MIRA here

Start-ups and Slow-downs: Delivering growth in a declining market

Investment in UK tech start-ups has increased at record rates as the global economy slows after the coronavirus pandemic. 

Between January and May 2022, British technology companies raised more than £12bn in investment. This record fundraising brought UK start-up investment to the top of the pack, trailing only behind the US, and surpassing Chinese investment for the first time in recent history.

Home to more tech start-up unicorns than any country other than the US and China, the UK is experiencing unprecedented growth in its tech sector. 

However, this meteoric rise is set against the backdrop of a post-pandemic economy, and signs of a fall back to earth are imminent. The global market is slowing, and base-rate rises by central banks mean loans are increasingly expensive.

Many start-ups formed during the pandemic (like grocery delivery services and virtual conferencing platforms) had sky-high valuations but are now fighting to prove their post-pandemic worth. 

Venture capital investment firms, which also raised unprecedented funds during the pandemic, are increasingly sceptical of backing start-ups and unicorns as it becomes more expensive to invest. A now common trend in UK post-pandemic VC is the ‘haircut’, where start-ups raise further rounds of investment but receive far less funding than anticipated based on their valuation.

What does this mean for start-ups and VC firms in the UK?

There are now more UK tech start-ups than ever, funded by an extraordinary amount of domestic and international money. But this crowded marketplace is backed by global investors, and overseas markets are experiencing a distinct downturn.

So, start-ups must separate themselves from the pack, establishing not only feasibility but also prominence in a populous market.  Meanwhile, VCs must be more selective in their investment while vying for the most viable start-ups (and ensuring that the start-ups they have already funded succeed). 

In both cases, public relations and reputation management can contribute to success. For start-ups, strategic relationship- and identity-building serve to pinpoint their purpose. In the UK, investment in purpose-driven start-ups has increased to $3.5bn in the past decade.

Additionally, there is a positive correlation between strategic reputation management and revenue, key to building credibility as start-ups seek VC investment. Media coverage of start-ups has been shown to increase VC investment, performance during IPO, and long-term survival chances. 

For VCs, establishing marketplace security through reputation development is key to succeeding in a post-pandemic economy.

Reputable VCs, or VCs that have a strong public presence that reflects the success and trustworthiness of the firm, benefit from less costly and larger fundraising and are more likely to invest in successful start-ups.

Reputation and public relations have been found to be more significant indicators of VC success than the age of the firm, its investments, or its connectedness in the market. 

As liquidity tightens, it can seem counterintuitive at first glance to spend valuable cash to fortify reputation and fund or portfolio company profile; however, investment in PR might very well be the difference between success and failure as the market starts to turn.





Coming Soon. Or Here Already? Quantum Computing-as-a-Service


Alongside Oxford’s commitment to advancing AI technologies, delivering a quantum compute capability to the commercial sector in a service format is an equally effervescent field of enquiry in both the city and the university.

As such, Oxford is reckoned to be the UK’s largest and most diverse centre for quantum research with 38 operational research teams focused on harnessing quantum effects in a new generation of devices that will outperform existing classical computers.

And some of this endeavour has found its way to real-world application, most notably with Oxford Quantum Circuits, which operates the UK’s only commercially available quantum computer. OQC’s Quantum Computing-as-a-Service (QCaaS) platform has been put to work, most notably in a project managed by Cambridge Quantum to address one of the most pressing challenges of the quantum era: the threat to security encryption.

Cambridge Quantum used Oxford Quantum Circuits’ QCaaS platform to validate their cybersecurity approach by using verifiable quantum entropy from quantum computers to generate superior cryptographic keys.

Launched in July 2021, Oxford Quantum Circuits’ confident march into the commercial PAYG market identified a range of enterprise applications where a quantum compute capability has the potential to generate exponential gains, including supporting Oxford’s disciplinary focus on AI, with quantum offering the capability to develop yet more powerful algorithms with endless application.

But the reach of quantum will extend far beyond AI. The early adopters of QCaaS are expected to be the pharmas, in their search for better predictive health models and therapies; financial institutions seeking more reliable assessments of trading and risk management strategies; energy generators, especially in fields such as battery chemistry and battery management systems (BMS); and organisations concerned with cryptography and national security.

As part of the technology maturity cycle, however, the enterprise application of QCaaS is perhaps still some way off. Amazon Braket, the environment designed to enable the testing and validation of quantum algorithms, is currently dominated by researchers and government agencies, such as the Italian National Institute for Nuclear Physics, rather than a phalanx of commercial companies.

But for sectors where the dividend is significant, the adoption rate will pivot quickly. The AI/ML experience tells us that the banking and financial sectors typically have the muscle to invest in nascent technologies, especially as the wins for first movers can be significant while less adventurous competitors languish, limited by the constraints of classical computing.

Thus expect QCaaS customers in the first instance to come from the banking and financial services industry as they focus on increasing the speed of trade activities, transactions and data processing manifold.

Alongside the finance sector, expect to see pharmaceutical companies flock to quantum, again driven by first-mover advantage. By the time this pattern is established, QCaaS will be as common as classical cloud-based computing is today – in other words, quickly becoming ubiquitous in most enterprise contexts and, according to KPMG, worth US$86 billion by 2040.

Read more about quantum’s pioneers, including Oxford’s Ilana Wisby here

Cloud-based battery management: a solution to complex degradation

The widespread adoption of electric vehicles is dependent on ease of use, and rapid charging is a key factor. However, rapid charging brings significant safety challenges and can speed up lithium-ion battery degradation, reducing performance and life. Cloud-based battery management, which can optimise charging for individual vehicles without accelerating battery ageing, provides a pathway to this objective.

As EV take-up increases, not everyone will have access to overnight charging at home. Those needing to charge on-the-go using public charging infrastructure will want this process to happen as quickly as possible.

To maintain convenience and align with our expectations learned from internal combustion engine vehicles, a target for vehicle manufacturers is to make recharging as quick as filling with petrol.

However, Lithium-ion (Li-ion) batteries are extremely sensitive and rapid charging accelerates battery degradation in some circumstances. Moreover, individual usage patterns will result in unique battery ageing that increases the complexity of providing appropriate solutions.

Causes of degradation

Three of the key factors that contribute to degradation in Li-ion batteries are temperature, state of charge (SoC) and charge current, with each of these having a different effect on how the cell ages.

When a Li-ion cell is operating at a higher than optimal temperature, the rate of solid electrolyte interphase (SEI) layer formation and growth can increase, leading to heightened lithium consumption.

It can also cause a higher rate of cathode oxidisation that results in loss of positive electrode active material. Both phenomena degrade battery performance and life. At the other end of the scale, if the temperature is too low there is the potential of lithium plating.

This is the formation of lithium metal deposits on the surface of the anode, which causes rapid degradation and can result in hazardous conditions that pose a safety concern such as thermal runaway.

State of charge is the ratio of the amount of charge stored in a battery relative to the total charge that battery can store. At high SoC, the increased instability between electrode and electrolytes can cause increased chemical reactions.

This accelerates degradation of the electrolyte and both active materials. Low SoC can also cause degradation due to contraction and subsequent damage to the negative electrode and surrounding SEI layer.

High charging currents can also contribute to degradation, particularly at low temperatures. Higher charging currents cause greater average current density at the negative electrode. This lowers the negative electrode potential, which in turn increases the propensity of lithium plating.

This can cause rapid degradation and creates risk of thermal runaway. It is heavily dependent on cell temperature and the current magnitude seen by the anode during charge: the higher the charge current and lower the temperature, the bigger the risk.

If conditions are favourable for lithium to plate on the anode surface quickly, it can form dendrites, metallic microstructures that consume lithium, increase resistance and pose a safety risk.
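
The interplay of temperature and charge current described above can be sketched as a crude rule of thumb. The thresholds below are illustrative assumptions only, not validated cell data, and real battery management systems use far richer electrochemical models.

```python
# Hypothetical rule-of-thumb classifier for lithium-plating risk:
# risk rises as charge current (C-rate) rises and cell temperature falls.
# Thresholds (10 degrees C, 1C, 2C) are invented for illustration.

def plating_risk(temp_c: float, charge_c_rate: float) -> str:
    """Crudely classify lithium-plating risk from temperature and C-rate."""
    if temp_c < 10 and charge_c_rate > 1.0:
        return "high"      # cold cell plus fast charge: plating most likely
    if temp_c < 10 or charge_c_rate > 2.0:
        return "elevated"  # one aggravating factor present
    return "low"           # warm cell, gentle charge

# A cold cell fast-charged is the worst case; a warm, gently charged
# cell is the safest.
risk = plating_risk(5.0, 2.0)
```

The point of the sketch is the shape of the dependency, not the numbers: both factors must be considered together, which is why blanket charging limits leave performance on the table.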

Mitigating degradation

The considerable scope of influencing factors such as temperature, SoC and current results in a wide variation of Li-ion battery ageing characteristics. Sensitivity to these parameters is also highly design dependent, with individual cell designs behaving differently. But, by using vehicle data to determine the complex changes in cell performance and its principal causes, it is possible to mitigate degradation.

A baseline charging profile is developed for the vehicle with on-board analysis evaluating capacity and power fade state of health metrics. When there is deviation from the original beginning of life characteristics, the data is passed to the cloud when the vehicle is plugged in during rapid charging.

In the cloud, incremental capacity analysis (ICA) is used to investigate the capacity and health of the Li-ion batteries. This approach goes beyond standard state of health estimation. The cloud-based evaluation algorithm can distinguish the type of degradation.

In the case of capacity fade, this is from three main types: loss of lithium, negative electrode active material loss, or positive electrode active material loss. The combination of the capacity and resistance change symptoms can then be used to identify the root cause of the Li-ion cell ageing.
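
At its simplest, incremental capacity analysis differentiates charge throughput with respect to voltage to obtain a dQ/dV curve; the positions and heights of its peaks shift as the cell degrades. The sketch below uses synthetic data to show the mechanics of the computation only – the cloud pipeline described above is, of course, far more sophisticated.

```python
# Minimal sketch of ICA: compute dQ/dV from a slow, monotonic charge
# curve using numerical differentiation. Data here is synthetic.
import numpy as np

def incremental_capacity(voltage: np.ndarray, charge_ah: np.ndarray) -> np.ndarray:
    """Return the dQ/dV curve for a monotonic charge sweep."""
    return np.gradient(charge_ah, voltage)

# Idealised linear charge curve: 3.0 V to 4.2 V, 0 to 50 Ah.
v = np.linspace(3.0, 4.2, 200)
q = 50 * (v - 3.0) / 1.2
dq_dv = incremental_capacity(v, q)
# For a linear curve dQ/dV is flat; real cells show peaks at the phase
# transitions of the electrodes, and movement of those peaks between
# check-ups hints at loss of lithium or of active material.
```

Tracking how these peaks drift over a fleet, cycle after cycle, is what lets the cloud algorithm separate the degradation modes rather than merely report a single state-of-health percentage.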

Creating EV differentiation

With this information it is possible to recalibrate the battery management system for the unique challenges of that individual vehicle. New rapid charging current limits are set and an optimised charging strategy is created.

With continuous recalibration of the battery management system on a vehicle-by-vehicle basis, maximum charging capability can be achieved while offering protection against lithium plating.
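
The recalibration step can be pictured as deriving a new rapid-charge current limit from the cloud’s diagnosis. The derating factors below are invented placeholders that show the shape of the logic, not any manufacturer’s actual calibration.

```python
# Hedged sketch of per-vehicle recalibration: shrink the rapid-charge
# current limit as the pack ages, with extra margin if past diagnostics
# suggested lithium plating. All factors are illustrative assumptions.

def rapid_charge_limit(base_limit_a: float, capacity_fade: float,
                       plating_history: bool) -> float:
    """Return an updated rapid-charge current limit in amps.

    capacity_fade: fraction of original capacity lost (0.0 = new pack).
    plating_history: True if earlier cloud analysis flagged plating.
    """
    limit = base_limit_a * (1.0 - capacity_fade)  # scale with remaining capacity
    if plating_history:
        limit *= 0.8                              # extra margin against plating
    return limit

# A pack rated for 300 A when new, now with 10% fade and a plating flag:
new_limit = rapid_charge_limit(300.0, 0.1, True)
```

Because the limit is recomputed per vehicle from its own history, two nominally identical cars can end up with different charging strategies, which is precisely the differentiation argument made below.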

Cloud-based battery management has the potential to enable faster, safer charging without reducing battery life. Moreover, it’s agnostic, so applicable to any Li-ion cell chemistry or format. For vehicle manufacturers, this solution to battery degradation has the potential to create differentiation amongst EVs, providing a competitive advantage in terms of enhanced convenience and reduced recharging times, without compromising warranty, performance or battery life.

Consortium of Investigative Journalists lays bare SARs failings

The International Consortium of Investigative Journalists’ report into the global banking sector’s use of Suspicious Activity Reports, or SARs, as flags of convenience for laundering dirty money has laid bare the full extent of global capital movements from illicit origins to respectable destinations.

One institution alone, Deutsche Bank, filed SARs that, according to the FT, totalled $1.3tn of transactions, providing a sobering perspective on the size of the problem at hand. While pretty much every major banking institution has been named in the leak and will be reviewing the efficacy of its due diligence and KYC processes, it is apparent that the sector’s emphasis will now certainly shift towards more conscientious, active investigation of capital provenance, relying less on passive SARs.

The question is, how will banks make a fist of this need to more actively investigate the provenance of capital moving through their businesses, and how do they claw back money that lands outside the system?

Consistent with most conventional wisdom, the starting point is aiming to prevent dark money from entering the system in the first place. What is less apparent is exactly how low the due diligence bar is as a protection against fraud. The SARs scandal has now laid this bare.

Most responses in the financial sector commence after the due diligence stage, when deals have collapsed or turned sour, applying ‘after-the-fact’ analysis of probity. In almost every case, the primary actor should never have been entertained – as in a recent property investment fraud where the most cursory of checks would have uncovered public records of bad character, or in relation to Wirecard’s COO, Jan Marsalek, about whom red flags had been raised years before his hand in one of the more flagrant financial deceits became fully apparent in June last year.

While prevention is always better than cure, if things do go wrong, investigations conducted through traditional legal protocols such as bankruptcy proceedings or forensic accounting rarely succeed, because these are precisely the toolsets the fraudster aims to outwit from the get-go.

In the case of the investment fraud above, a law firm, with the full support of the courts, had made no headway on the perpetrator’s whereabouts, known assets or sources of income after eight months of enquiry.

In such circumstances, different and far more effective investigative protocols are required, informed by military know-how with law enforcement and cyber insight. In this particular case, within three weeks, an independent investigations firm had applied a targeted methodology and unique intelligence collection tools to develop a full profile of the fraudster, including multiple addresses, known associates and likely sources of income.

Further investigation identified a network of cash-rental properties and other laundering enterprises that gave the courts the basis for issuing warrants and seizure of property, all at a cost of 1% of the funds recovered.

If principle one is to go beyond vanilla due diligence, then principle two is to deploy investigative protocols that the fraudster will not have prepared to evade from the outset of their criminal adventure.

Principle three is to mesh these capabilities into an investigative capability that transcends national boundaries. As the SARs scandal emphasises, it is the very movement of capital across territorial borders that enables it to evade tracing and detection, especially in what might be termed hostile jurisdictions.

In a recent investigation for a European investment bank that had lost $800m in a fraud committed by a Russian oligarch, seven years of pursuit with a legal team with recourse to several European courts had failed to make inroads into recovering the misappropriated money.

In short order measured in weeks, pan-national investigators were able to identify a network of people of interest to the courts who manifestly were living beyond their income and identify a number of key actors who held governmental positions.

This development not only gave the courts a foothold to bring individuals into the legal system for redress but also to charge a state for being complicit in the fraud. This positive outcome has transformed seven years of frustration into legal options that allow damages to be awarded and investors’ money to be recovered.

Investigation of financial crime in the globalised economy therefore must scale across jurisdictions. Criminals understand the playbook of the authorities from the application of predictable, hidebound legal processes to the limitation of many forms of enquiry to simply stop at national boundaries. In short, the investigation capabilities need more firepower than the fraudsters.