The package manager for agent skills and context
Versioned, evaluated skills and context for agentic software development.
Get started for free
Tessl helps you find, install, version, and evaluate the skills and context your coding agents rely on, so they behave consistently across tools and projects.
Used by Engineers at
Context can drive up to a 3.3X improvement in agents' use of over 300 libraries
How good is your skill? Put it to the test
Evaluate any skill against structured best practices for descriptions and content.
EVALUATE YOUR SKILL
Make agents successful in your environment
Agents don’t know how to develop in your organization; they need to be onboarded. Turn your APIs, libraries, and conventions into agent-usable skills, docs, and rules, so agents stop guessing and start behaving like experienced team members.
- Version-matched OSS and internal APIs
- Correct imports, calls, and constraints
- Fewer retries and review cycles
Evaluate what works in real-world scenarios
Not all context is equal, and mistakes can mislead agents or overwhelm context windows. Evaluate and optimize your skills by running agents through real-world scenarios, and test changes to avoid regressions over time.
- Repeatable task evaluations
- Regression detection as skills, agents and models evolve
- Learn whether your context helps or hurts agent performance
Create skills once. Use them across all agents and models
Tessl gives you a single source of truth for skills and context, reusable across agents, models, and development environments without duplication or drift.
- Avoid lock-in with universally compatible context
- Consistent behavior across agents
- Collaborate on context with your team and agents

Keep your agent on the rails with better context. Discover thousands of evaluated skills in the Tessl Registry.
Don’t take our word for it
Featured Articles
Explore our guides and resources to understand key concepts relating to AI agents.

Double your coding agent’s chances of writing secure code with the CodeGuard Skill
Enhance AI coding agents with the CodeGuard Skill to improve secure code generation by applying Cisco's security rules, covering 23 categories and multiple languages.
Read more

How to Evaluate AI Agents: An Introduction to Harbor
Harbor introduces a new approach to evaluate AI agents, focusing on statistical evaluation over traditional testing to address non-deterministic behavior in AI systems.
Read more

Announcing AI Native Dev Con: Supercharge development today, and reimagine it for tomorrow
We’re excited to announce the launch of a brand new conference, AI Native Dev Con, kicking off with an inaugural virtual event on 21st November, 2024. The conference aims to help you use AI to develop faster and better today, and to explore how AI is reshaping the way we will build, maintain, and evolve software tomorrow. We’ll highlight exciting new tools and advancements in AI-powered software development, with a focus on how large language models are changing how we build, maintain, and scale complex codebases. Join us to explore a future where AI goes beyond generating code snippets to orchestrating the creation and evolution of entire software systems.
Read more

Terminal-Bench: Benchmarking AI Agents on CLI Tasks
Terminal-Bench is a new benchmark testing how well AI agents handle real-world terminal tasks, revealing big performance gaps and sparking a wave of innovation in system-level agent design.
Read more

Making React apps multilingual without rewriting existing components
Translate React apps at build-time with zero code refactoring using Lingo.dev’s AI-powered compiler – multilingual UIs made effortless for developers.
Read more
Get started in seconds
Or explore skills and context in the Registry.