Getting started with AI-assisted development in the Eclipse Foundation Software Development team

A lot has been written about AI in software development. Much of it focuses on what the technology can do, or what teams have already built with it.

What is discussed less often is how teams responsible for widely used systems can introduce these tools carefully. This post looks at how our team is approaching AI-assisted development, and what we want to get right before we move further.

At the Eclipse Foundation, we maintain infrastructure used by a large and distributed open source community. The Eclipse ecosystem includes more than 400 open source projects and over 15,000 contributors worldwide.

Our team builds and maintains several of the applications that support that ecosystem, including the Open VSX Registry, the Eclipse Marketplace, contributor agreement tooling, and services used by many active open source projects.

Our team is small relative to the scope of what we support, and the systems we build must remain reliable, secure, and maintainable over the long term.

We are beginning to introduce AI-assisted development practices across the team, starting with a small set of controlled experiments. Here is how we are approaching it.

Starting with the right question

The question we kept coming back to was not "how do we use AI?" but "how do we use AI responsibly, given the nature of what we build?"

Mistakes in our systems do not just affect our organisation. They can affect many projects and developers who rely on the services we provide. That kind of reach means we need to be particularly deliberate when introducing new development practices.

That context shapes how we approach this work. Before discussing tools or workflows, we spent time defining the guardrails that will guide how we begin.

Isolated environments for agentic workflows

Part of our exploration includes experimenting with agentic workflows — systems where AI can generate code, execute commands, and interact with development tools.

That naturally raises a practical question: where should those agents run?

Our starting principle is that AI agents should operate in isolated environments. In practice, this means containerised sandboxes.

Projects and platforms like Docker AI Sandboxes, nono.sh, Daytona, and Modal are beginning to formalise this pattern. They provide controlled environments in which agents can run AI-generated code and experiment without access to production systems.

The reasoning is straightforward. Agents capable of executing commands or interacting with systems need clear boundaries. Not because the tools are uniquely unsafe, but because containment is standard engineering discipline: any automated system introduced into a workflow should begin with limited access and well-defined boundaries.

Running agents inside isolated environments such as Docker AI Sandboxes allows them to write code, run tests, and experiment reproducibly without direct access to sensitive infrastructure.

As part of this approach, agents will not have access to production credentials or other sensitive information, and they will not run inside our internal networks. If something behaves unexpectedly, the impact remains limited and recoverable.
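To make that concrete, here is a minimal sketch of the kind of containment we mean, expressed as plain Docker flags. This is an illustration, not our actual configuration; the image name, mount path, and entry script are placeholders.

```shell
# Sketch of a contained agent run (image and script names are placeholders):
# - no network access (--network none), so no reach into internal systems
# - read-only root filesystem; only the mounted workspace is writable
# - all Linux capabilities dropped; runs as an unprivileged user
# - memory and process limits keep runaway behaviour bounded
docker run --rm \
  --network none \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --user 1000:1000 \
  --memory 2g \
  --pids-limit 256 \
  -v "$PWD/workspace:/workspace" \
  -w /workspace \
  agent-sandbox-image ./run-agent.sh
```

Platforms like the ones mentioned above package this kind of isolation with more ergonomics, but the underlying boundary is the same: if something behaves unexpectedly, the damage is confined to a disposable workspace.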

This is not a new mindset for us. The same discipline we apply to dependency management, deployment pipelines, and access control applies here as well. AI tooling does not get a special exception simply because it is new.

Where AI can help first

Our goal is not to automate judgement. It is to reduce friction in work that is largely mechanical, repetitive, or easy to postpone.

The clearest opportunities we see today include:

  • Rapid prototyping and technical discovery: Using AI for "architectural spikes" — building quick prototypes to validate a concept or explore a new technology. This helps us understand the "shape" of a solution and identify technical blockers early, so that when we move to production we do so with a clearer, research-backed roadmap.

  • Test generation for well-defined functions: Writing unit tests for stable, well-scoped code is repetitive work that often falls behind. AI-assisted generation can help accelerate this when done in a controlled environment.

  • Documentation drafts: Keeping documentation up to date is an ongoing challenge for a small team. Generating a first draft from code or issue descriptions, followed by human review and editing, fits naturally into our workflow.

  • Scaffolding and boilerplate: Creating the initial structure for new services, migration scripts, or API endpoints often involves repetitive setup work. Reducing that friction can make development faster without sacrificing quality.

  • Technical debt and modernisation work: Like many small teams, we still run legacy applications and services that need attention but are easy to postpone when day-to-day operational work takes priority. AI-assisted development may help us make more consistent progress on refactoring, code cleanup, migrations, and other modernisation work that too often gets pushed aside.

  • Website maintenance, redesigns, and framework migrations: Our team also maintains websites such as eclipse.org and many working group sites. Work such as template updates, redesigns, framework migrations, accessibility improvements, and content restructuring often involves repetitive implementation work that could benefit from AI-assisted workflows.
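As a small illustration of the test-generation case above, consider a well-scoped pure function and the kind of unit tests an assistant might draft for it. The function and names here are hypothetical, and a reviewer still has to confirm the cases reflect the real contract before anything is merged.

```python
# Hypothetical example: a small, well-scoped function that is a good
# candidate for AI-assisted test generation.

def normalise_extension_id(namespace: str, name: str) -> str:
    """Build the canonical '<namespace>.<name>' identifier, lower-cased."""
    if not namespace.strip() or not name.strip():
        raise ValueError("namespace and name must be non-empty")
    return f"{namespace.strip().lower()}.{name.strip().lower()}"


# The kind of tests an assistant might draft; a human reviewer checks
# that these cases actually match the intended behaviour.
def test_normalises_case_and_whitespace():
    assert normalise_extension_id("RedHat", "Java") == "redhat.java"
    assert normalise_extension_id(" Foo ", "Bar") == "foo.bar"


def test_rejects_empty_parts():
    try:
        normalise_extension_id("", "java")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty namespace")
```

The value is not that the tests are clever; it is that this kind of coverage is tedious to write by hand and therefore tends to fall behind.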

In all cases, AI-generated output must still go through the same review and validation processes we apply to any other code change. Developers remain responsible for understanding the problem being solved, reviewing the generated code, and ensuring that any changes meet our security and reliability standards.

What we expect to learn

We are approaching this work with genuine uncertainty. Some of the automation we are exploring may prove more useful than expected. Other ideas will likely reveal friction or limitations we have not yet anticipated.

What matters most is the approach: start contained, observe carefully, and expand where the benefits are clear. The goal is not to adopt AI quickly. It is to adopt it thoughtfully.

More broadly, the role of the developer is beginning to evolve. Over time, we may spend less effort writing every line of code by hand and more time reviewing, validating, testing, approving, and iterating on generated output to improve the systems we operate.

For teams maintaining shared infrastructure, that shift does not make engineering judgement less important. If anything, it makes it more important — which is exactly why we want to be deliberate about how we begin.

Christopher Guindon

Director, Software Development at The Eclipse Foundation
