
Open-source AI: benefits, risks, and how to assess projects responsibly

As artificial intelligence systems become embedded in everyday tools, attention is turning to models built in the open. Supporters say transparency and shared development can restore trust and competition. Critics warn that openness brings new security and misuse risks.

by Mr Moonlight

Open-source artificial intelligence has moved rapidly from developer platforms like GitHub to become a mainstream choice for governments, businesses and developers. Systems built with openly published code or model weights now sit behind chatbots, document analysis tools, scientific research platforms and internal corporate systems.

For many users, the attraction is pragmatic rather than ideological. Open models promise lower costs, more control and the ability to understand what a system is doing, rather than treating it as an inscrutable black box.

That promise is driving adoption across the public and private sectors. Public bodies are exploring open models to avoid long-term dependence on a small number of vendors. Companies are using them to customise tools for specific domains, from law to medicine, without sending sensitive data to third-party services.

Researchers see openness as a way to reproduce results and check claims about performance and safety. At the same time, the spread of powerful models outside tightly controlled commercial environments has sharpened fears about misuse, security flaws and accountability.

What “open” really means in artificial intelligence

The debate often turns on a deceptively simple word. “Open” in artificial intelligence does not have a single meaning, and misunderstanding the term can lead to misplaced trust or unnecessary alarm. In traditional software, open source refers to code released under licences that guarantee the freedom to inspect, modify and redistribute it. These licences, stewarded by organisations such as the Open Source Initiative, are designed to ensure that openness is legally meaningful rather than a marketing label.

In AI, openness can apply to several layers. Some projects release the source code used to train and run models, allowing others to study and improve it. Others publish the trained model weights, the numerical parameters learned during training, which makes it possible to run or fine-tune a model without repeating the original training process. A third category offers open access, meaning anyone can use a system through an interface, while the underlying code and weights remain closed. These distinctions shape who can audit a system, how easily it can be repurposed and where responsibility lies if harm occurs.
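To make the "open weights" case concrete, the sketch below shows what it looks like in practice: the published parameters are downloaded and run locally, with no dependence on a hosted service. It assumes the Hugging Face transformers library, and the repository identifier is a placeholder rather than a reference to any specific model.

```python
# Minimal sketch of running an openly published model from its weights.
# Assumes the Hugging Face `transformers` library; the repository id is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "example-org/example-open-model"  # placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The model runs entirely on local infrastructure once downloaded.
inputs = tokenizer("Summarise this contract clause:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

By contrast, an "open access" system would be reached only through a hosted interface, with neither code nor weights available to inspect or run locally.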

The case for openness: transparency, innovation and control

Advocates argue that open approaches address some of the central problems now facing AI adoption. Transparency is one. When code or weights are available, independent experts can test whether a system behaves as claimed, probe it for bias or vulnerabilities, and reproduce results. This does not guarantee safety or fairness, but it makes meaningful scrutiny possible. In fields such as cryptography and operating systems, openness has long been associated with stronger security because weaknesses can be identified by many eyes rather than hidden behind contracts.

Innovation is another driver. Training large models from scratch is expensive, but building on existing work is far more accessible. Open models allow startups, universities and public-interest groups to experiment and adapt systems for minority languages, local contexts or specialised tasks that would not attract commercial investment. This has helped prevent the AI landscape from being entirely dominated by a handful of large technology companies, even as those companies continue to play a central role.

There is also a strategic dimension. Organisations deploying AI in critical functions are increasingly concerned about lock-in. Open models can be hosted on local infrastructure, configured to meet specific regulatory requirements and maintained independently of a single supplier’s business decisions. For governments and regulated industries, that degree of control is often as important as raw performance. It explains why open development is supported by bodies such as the Linux Foundation and reflected in international guidance from the OECD and the National Institute of Standards and Technology.

Misuse, security and accountability risks

The risks associated with openness are not hypothetical. Publishing powerful models lowers barriers not only for benign experimentation but also for malicious use. Tools that can generate convincing text, images or code can be repurposed for fraud, disinformation or cyber attacks. While closed systems are also abused, open models make it easier for bad actors to adapt systems in ways that bypass safeguards built into commercial services.

Security concerns extend beyond misuse. Open projects rely on complex software supply chains, with many dependencies maintained by small teams or volunteers. A vulnerability or malicious change in one component can propagate widely, as past attacks on open-source infrastructure have shown. In AI, the attack surface is broader. Training data can be poisoned to introduce hidden behaviours, and model weights can be altered in subtle ways that are hard to detect without careful analysis.

Accountability is another challenge. When a model is developed by one group, modified by another and deployed by a third, it can be unclear who bears responsibility for harms. Victims may find it difficult to seek redress, particularly when actors are spread across jurisdictions. Openness does not remove the need for governance and legal clarity, but it can complicate them.

How to assess the quality of open AI projects

For organisations considering open models, the key question is how to judge whether a particular project is trustworthy and suitable for a given use. That assessment begins with licensing. Some projects use well-established open-source licences that clearly define rights and obligations. Others adopt bespoke licences that restrict commercial use or sensitive applications. These may reflect genuine safety concerns, but they can also introduce legal uncertainty and reduce interoperability. It is essential to understand whether a licence covers both code and weights, and what it permits in practice.
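As a first pass, licence information can often be read programmatically. The sketch below, assuming the huggingface_hub library and a model hosted on the Hugging Face Hub, checks whether a repository declares a licence at all; it says nothing about whether that licence also covers the weights, which still requires reading the terms themselves.

```python
from huggingface_hub import model_info


def declared_licence(repo_id: str) -> str | None:
    """Return the licence declared in a model repository's metadata, if any."""
    info = model_info(repo_id)
    # Licences are usually declared in the model card metadata...
    licence = getattr(info.card_data, "license", None) if info.card_data else None
    if licence:
        return licence
    # ...or surfaced as a "license:" tag on the repository.
    for tag in info.tags or []:
        if tag.startswith("license:"):
            return tag.split(":", 1)[1]
    return None


# A missing value is itself a signal worth investigating.
print(declared_licence("example-org/example-open-model"))  # hypothetical repository id
```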

Documentation offers further clues. Mature projects typically provide detailed descriptions of how models were trained, what data sources were used, and what limitations are known. In recent years, the practice of publishing model cards has emerged as a way to summarise intended uses and risks. While such documents are not a substitute for independent evaluation, their absence should prompt scepticism.
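A quick automated check can at least confirm that documentation exists before anyone reads it. This sketch, again assuming the huggingface_hub library, flags repositories that ship no model card or only an empty one.

```python
from huggingface_hub import ModelCard


def has_model_card(repo_id: str) -> bool:
    """Heuristic: does the repository publish a model card with substantive text?"""
    try:
        card = ModelCard.load(repo_id)
    except Exception:
        return False  # no model card could be fetched
    return bool(card.text and card.text.strip())
```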

Maintenance and governance are equally important. An open repository that has not been updated for a year, or where issues go unanswered, may pose long-term risks. Healthy projects usually have named maintainers, clear contribution guidelines and some form of roadmap. Governance structures that spread control across individuals or organisations reduce the risk that a project will be abandoned or steered in a narrow direction without warning.
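Repository activity is easy to sample before committing to a project. The sketch below uses the public GitHub REST API (assuming the project is hosted on GitHub and the requests library is available) to measure how long ago the repository last received a push.

```python
from datetime import datetime, timezone

import requests


def days_since_last_push(owner: str, repo: str) -> int:
    """Days since the repository last received a push, via the GitHub REST API."""
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    resp.raise_for_status()
    pushed_at = datetime.fromisoformat(resp.json()["pushed_at"].replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - pushed_at).days


# A repository untouched for a year deserves a closer look before adoption.
print(days_since_last_push("huggingface", "transformers"))
```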

Security practices deserve particular attention. Responsible projects publish policies explaining how vulnerabilities can be reported and addressed. Some commission external audits or invite researchers to test models for weaknesses. In an environment where AI systems are increasingly used in sensitive contexts, the absence of any visible security posture is a significant warning sign.
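The presence of a published vulnerability-reporting policy can also be checked mechanically. A common convention is a SECURITY.md file at the repository root; the sketch below (again assuming a GitHub-hosted project) simply tests for it, which is a weak but useful first signal.

```python
import requests


def has_security_policy(owner: str, repo: str) -> bool:
    """Check for a SECURITY.md file at the repository root via the GitHub contents API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/SECURITY.md"
    return requests.get(url, timeout=10).status_code == 200
```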

Community engagement can also be revealing. A project supported by a diverse set of contributors, including academics, companies and civil-society groups, is more likely to benefit from varied perspectives and early identification of problems. By contrast, projects dominated by a single organisation may be open in name but closed in practice.

Deploying open models safely in real-world systems

Even when a project appears robust, deployment choices matter. Many harms arise not from the existence of a model but from how it is used. Introducing open models into real-world systems calls for careful scoping and monitoring. Running pilots in controlled environments allows teams to observe unexpected behaviours before they affect users. Combining models with human oversight, especially in high-stakes decisions, reduces the risk of automated errors causing real harm.
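The pattern of combining model output with human oversight can be as simple as a routing rule. The sketch below is a generic illustration, not any particular product's API: outputs below a confidence threshold, or in categories flagged as high stakes, are queued for human review instead of being applied automatically.

```python
REVIEW_THRESHOLD = 0.85                          # hypothetical cut-off; tune against pilot data
HIGH_STAKES = {"medical", "legal", "financial"}  # illustrative categories


def route(prediction: str, confidence: float, category: str) -> dict:
    """Route a model output either to automatic use or to a human reviewer."""
    if category in HIGH_STAKES or confidence < REVIEW_THRESHOLD:
        return {"action": "human_review", "suggested": prediction}
    return {"action": "auto_apply", "result": prediction}


print(route("Approve claim", confidence=0.72, category="financial"))
# -> {'action': 'human_review', 'suggested': 'Approve claim'}
```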

Organisations also need to plan for change. Open projects evolve quickly, and upstream updates can introduce both improvements and new risks. Keeping track of changes, applying patches and having a strategy for replacing a model if maintenance stops are all part of responsible use. None of these tasks disappears simply because a model is open.
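One concrete way to keep upstream changes under control is to pin the exact revision of a model rather than tracking the latest version. Assuming the transformers library, the revision argument accepts a branch, tag or commit hash; the repository id and hash below are placeholders.

```python
from transformers import AutoModelForCausalLM

# Pin a specific, reviewed revision so upstream updates cannot change
# behaviour silently; upgrade deliberately after re-testing.
model = AutoModelForCausalLM.from_pretrained(
    "example-org/example-open-model",                       # hypothetical repository id
    revision="0123456789abcdef0123456789abcdef01234567",    # placeholder commit hash
)
```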

Regulation and the future of open AI

Policy debates often frame openness as an alternative to regulation, but in practice the two are intertwined. European and international policymakers are grappling with how to encourage open research and innovation without enabling harm. The European Union’s AI Act, for example, acknowledges the value of open-source development while focusing regulatory obligations on high-risk applications rather than development models alone. International guidance tends to emphasise risk-based approaches, recognising that the same technology can be benign in one context and dangerous in another.

What emerges from this landscape is not a simple verdict on open-source AI, but a set of trade-offs. Openness can strengthen scrutiny, resilience and competition, but only when paired with active governance and responsible deployment. Closed systems can limit some forms of misuse, but at the cost of transparency and control. For users and organisations navigating these choices, the most important skill may be discernment rather than allegiance to any single model of development.

Open-source AI is best understood as a powerful toolset rather than a moral position. Its impact depends on how carefully projects are built, how honestly their limits are communicated and how thoughtfully they are deployed.
