
Tech companies shouldn’t wait for policymakers to regulate AI

Oct 29, 2025


Governments around the world, the UK among them, are dithering over AI regulation, caving to the allure of Silicon Valley. Tech companies and users should take it upon themselves to self-regulate and prioritise principles of transparency, accountability and data sovereignty.

Governments are hesitant to stifle innovation. Dazzled by AI’s transformative potential, world leaders don’t want to lose the race. Regulation is lagging behind, with only a handful of nations having passed laws to address public concerns about AI.

Ahead of Donald Trump’s visit to London last month, the UK delayed its own AI Bill until at least summer 2026. The UK-US Tech Prosperity Deal was still being negotiated, and ministers were wary of making the UK unattractive to Silicon Valley tech giants. The biggest investment package of its kind in British history is on the table, with 250 billion pounds set to flow both ways across the Atlantic.

In its manifesto before last year’s general election, the Labour Party pledged: “Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”

Parliament has cracked down on sexually explicit deepfakes, ruling that predators who create them could face prosecution as Keir Starmer’s government bears down on vile online abuse. But the other half of the promise is still in limbo, with powerful AI companies so far getting off lightly. Instead of prioritising safety and copyright issues, the government is dithering in favour of the likes of Nvidia, Microsoft, OpenAI and the deep pockets of BlackRock.

[ 👋 Hi there! If you’re here to find out more about FLock.io, follow us on X, read our docs and sign up to AI Arena on train.flock.io]

Now, the UK is falling behind its European neighbours. The sheer velocity of AI development is outrunning policymakers, and legislation may already be outdated by the time it finally takes effect. Italy passed a set of laws in September 2025 limiting children’s access to AI and imposing prison terms for harmful uses such as generating deepfakes. The EU AI Act entered into force last year, marking the world’s first comprehensive AI legislation. Across the Atlantic, meanwhile, regulation is fragmented, and prospects for broad legislation from Congress remain doubtful.

Just last Thursday (October 23), a “bold” step was taken in the right direction: a new AI Sovereignty deal was struck between the UK Ministry of Justice and OpenAI, under which the tech giant will enable its business customers to store their data on British soil for the first time. British businesses will be able to host data on more secure, sovereign servers, reinforcing national resilience in the face of growing global cyber threats.

But bolder steps are possible. In the absence of legislation, AI companies must take matters into their own hands. The industry should take the lead on self-governance and ethical guardrails to build public trust and pre-empt potentially heavy-handed or poorly informed regulation.

Centralised AI systems may offer scale, but they often fall short on trust and adaptability. That’s why decentralised AI, through approaches like blockchain-powered federated learning, is so vital. It lets countries and organisations work together and share insights while keeping their data secure locally, without putting all the control in one place.
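To make the idea concrete, here is a minimal sketch of federated learning in Python, assuming a toy linear model: each organisation trains on data that never leaves its premises, and only the resulting weight updates are shared and averaged. The function names and setup are illustrative, not FLock.io’s actual API.

```python
# Minimal federated-averaging sketch (illustrative only, not FLock.io's API).
# Each participant trains locally; only model weights leave the premises.
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """Run one training step on data that never leaves this participant."""
    X, y = local_data
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)   # gradient of mean squared error
    return weights - lr * grad          # updated weights; raw data stays local

def federated_round(global_weights, participants):
    """Average the participants' updated weights; only parameters are shared."""
    updates = [local_update(global_weights, data) for data in participants]
    return np.mean(updates, axis=0)     # simple federated averaging

# Example: three organisations, each holding a private (X, y) dataset.
rng = np.random.default_rng(0)
participants = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(20):
    weights = federated_round(weights, participants)
print("global model trained without pooling any raw data:", weights)
```

In a real deployment the local step would be many epochs of training on each organisation’s own infrastructure, but the principle is the same: insight is shared, data is not.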

Tech companies must commit to greater model transparency, accountability and data sovereignty. Blockchain offers a secure and dependable foundation for all three.

Transparency

The black box nature of powerful AI models is a major driver of public distrust. True self-governance requires tech companies to move beyond vague ethical statements and commit to radical model transparency. This means not just releasing a policy document, but openly documenting training datasets and error rates.

That means being explicit about the datasets used to train models, including their origin, potential biases and limitations. If a model’s training data disproportionately represents one demographic, the company must proactively disclose this and acknowledge the predictable bias in its outputs.

Publishing accessible documentation on what a model can and cannot do, including its error rates across different demographics, is fundamental to building trust. 
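As an illustration of what that documentation might look like in machine-readable form, here is a hedged sketch of a “model card” record. The field names, model name and numbers are hypothetical; the point is pairing dataset provenance with error rates broken down by group rather than a single headline accuracy figure.

```python
# Illustrative sketch of a machine-readable transparency record ("model card").
# Field names and values are hypothetical, not any vendor's published standard.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    training_datasets: list          # origin of each dataset used
    known_limitations: list          # what the model cannot do
    error_rates_by_group: dict = field(default_factory=dict)

def error_rate(predictions, labels):
    """Fraction of incorrect predictions for one demographic slice."""
    wrong = sum(p != l for p, l in zip(predictions, labels))
    return wrong / len(labels)

card = ModelCard(
    model_name="credit-scoring-v2",
    training_datasets=["UK loan applications 2018-2023 (skewed toward urban applicants)"],
    known_limitations=["Not validated for applicants under 21"],
)
# Publish per-group error rates instead of a single headline accuracy figure.
card.error_rates_by_group = {
    "urban": error_rate([1, 0, 1, 1], [1, 0, 1, 0]),
    "rural": error_rate([1, 1, 0, 0], [0, 1, 1, 0]),
}
print(json.dumps(asdict(card), indent=2))
```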

Accountability 

When an AI system causes harm – be it a biased loan algorithm or an inaccurate medical diagnosis – the public needs a clear chain of responsibility, not a corporate shrug. Accountability means building systems that are designed to be auditable by an independent third party.
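One way to make that auditability concrete is a tamper-evident decision log. The sketch below, with hypothetical record fields, hash-chains each decision so an independent auditor can later confirm that nothing was silently altered after the fact.

```python
# Sketch of a tamper-evident decision log an independent auditor could verify.
# Hash-chaining each record means any retroactive edit breaks the chain.
import hashlib, json, time

def append_record(log, decision):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"timestamp": time.time(), "decision": decision, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return log

def verify(log):
    """A third party recomputes every hash; True only if nothing was altered."""
    for i, rec in enumerate(log):
        expected_prev = log[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != expected_prev or rec["hash"] != recomputed:
            return False
    return True

log = []
append_record(log, {"applicant": "A-102", "loan_approved": False, "model": "credit-scoring-v2"})
append_record(log, {"applicant": "A-103", "loan_approved": True, "model": "credit-scoring-v2"})
print("audit trail intact:", verify(log))
```

Anchoring such a log on a public blockchain, rather than in a company database, is what turns “trust us” into something a regulator or affected user can check independently.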

Data sovereignty for countries and users

Perhaps the most critical principle is data sovereignty. At a bare minimum, this is the requirement that data is subject to the laws and governance of the country where it is collected or stored. The recent AI Sovereignty deal between the UK Ministry of Justice and OpenAI is a tentative step in the right direction, but it isn’t bold enough: it reinforces centralised risk by consolidating data with a single, foreign-controlled tech giant.

We must take it a step further and pursue not just “AI sovereignty” but “user sovereignty”, giving individuals and data contributors more rights.

This fundamental power imbalance can only be rectified by building an entirely new infrastructure: Decentralised AI (DeAI). This is where the combination of Federated Learning (FL) and Blockchain Technology becomes essential. FL solves the data movement problem by keeping data localised, and the blockchain solves the transparency and control problem by removing the centralised coordinating entity.
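A rough sketch of how the two pieces fit together: each participant trains locally, as in the earlier federated learning example, and commits only a fingerprint of its model update to a shared, append-only ledger, so contributions can be verified by anyone without a central coordinator. The in-memory list below stands in for an actual blockchain, and none of the names reflect FLock.io’s real protocol.

```python
# Illustrative sketch of pairing federated learning with a blockchain-style ledger.
# A simple in-memory list stands in for the chain; FLock.io's real protocol differs.
import hashlib
import numpy as np

ledger = []   # shared, append-only record visible to every participant

def commit_update(participant_id, weight_update):
    """Publish only a fingerprint of the local update, so the contribution is
    verifiable without exposing raw data or relying on a central coordinator."""
    digest = hashlib.sha256(weight_update.tobytes()).hexdigest()
    ledger.append({"participant": participant_id, "update_hash": digest})
    return digest

def verify_contribution(weight_update, digest):
    """Anyone can later check an update against its ledger entry."""
    return hashlib.sha256(weight_update.tobytes()).hexdigest() == digest

update = np.array([0.02, -0.01, 0.005])             # local model update; data stays home
receipt = commit_update("hospital-uk-01", update)
print("update recorded on ledger:", verify_contribution(update, receipt))
```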

Read our recent blog to find out why blockchain isn’t a cliché at FLock.io: it’s essential for scaling federated learning.

More about FLock.io

FLock.io’s ecosystem consists of three key components: AI Arena, a platform for competitive model training; FL Alliance, a privacy-focused collaboration framework that enhances models while preserving data sovereignty; and Moonbase, our new rewards layer and AI model marketplace. We also just launched our API Platform.

Find out more about FLock.io by reading our docs and blogs. See a sneak peek of new launches coming in Q4 2025.

For future updates, follow FLock.io on X.
