The Future of AI: Importing Skills into LLMs

Introduction

In the coming years, large language models (LLMs) will evolve from stateless text generators into fully fledged digital assistants capable of performing domain-specific tasks. What makes this transformation possible is the ability to import specialised skills from external modules. Skills, also called tools, plugins or functions, give models access to APIs, databases and services beyond their training data. When an LLM can call a `get_weather` function or a `create_ticket` endpoint, it stops hallucinating and starts executing.

This article explores why importing skills is central to the future of AI. We’ll look at frameworks like OpenAI’s function calling and Microsoft’s Semantic Kernel, examine emerging skills marketplaces, discuss benefits for domain expertise and workflow automation, and address the supply-chain risks of unvetted skills. Finally, we’ll consider how multi-agent systems and dynamic skill synthesis might shape this emerging landscape.

From Text Generation to Real‑World Actions

Traditional LLMs were designed to predict the next token in a sequence. They could hold a conversation, summarize documents or draft emails but they lacked agency. Modern systems bridge that gap through function calling, which gives models access to new functionality and data they can use to follow instructions and respond to prompts. Here’s how it works:

  1. Define the functions: developers provide a list of functions or tools, each described by a JSON schema. A function might retrieve stock prices, book a meeting or fetch weather data.
  2. Expose the tools: when making an API request to the model, the developer supplies the list of functions the model may call. The model reads these definitions and incorporates them into its reasoning.
  3. Model decides: as the model generates a response, it can decide to call one of the functions if it needs data beyond its training. In response to “What’s the weather in Paris?”, the model might call `get_weather(location="Paris")`.
  4. Execute and return: the application executes the tool, returns the result to the model, and the model integrates it into its final answer.
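The four steps above can be sketched in a few lines of Python. The `fake_model` stub stands in for the LLM’s decision-making, and `get_weather` returns canned data; both names and the schema layout are illustrative, not a specific vendor’s API.

```python
import json

# Step 1: define a tool and describe it with a JSON schema.
def get_weather(location: str) -> dict:
    # Stubbed data source; a real skill would call a weather API here.
    return {"location": location, "temp_c": 18, "conditions": "partly cloudy"}

TOOLS = [{
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]

def fake_model(prompt: str, tools: list) -> dict:
    # Stand-in for the LLM (step 3): it decides to call a tool
    # when the prompt needs data beyond its training.
    if "weather" in prompt.lower():
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"location": "Paris"})}}
    return {"content": prompt}

def run_turn(prompt: str) -> str:
    reply = fake_model(prompt, TOOLS)          # step 2: tools exposed with the request
    if "tool_call" in reply:
        call = reply["tool_call"]
        args = json.loads(call["arguments"])
        # Step 4: the application executes the tool and folds the
        # result into the final answer.
        result = {"get_weather": get_weather}[call["name"]](**args)
        return (f"It is {result['temp_c']} °C and "
                f"{result['conditions']} in {result['location']}.")
    return reply["content"]

print(run_turn("What's the weather in Paris?"))
```

In production the model, not an `if` statement, chooses the tool, but the control flow around it (expose definitions, parse the call, execute, return the result) is exactly this loop.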

This mechanism transforms a language model into an agent that can act. The ability to call functions is no longer limited to a few built-in tools: new capabilities can be imported on the fly.

Importing Skills: Mechanisms and Frameworks

Semantic Kernel

Microsoft’s Semantic Kernel is an open‑source SDK designed to orchestrate LLM interactions. It treats each skill as a plugin with one or more functions. Developers can import plugins in three ways: native code, OpenAPI specifications or MCP Servers. Native plugins are written in the host language and annotated so the kernel recognizes callable functions. OpenAPI plugins generate functions from API definitions, making cross‑team sharing easier. MCP Servers expose plugins as network services, allowing multiple applications to consume them. Semantic Kernel recommends starting with native plugins and moving to OpenAPI or MCP as systems scale.
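The native-plugin pattern is simple enough to sketch without the SDK: functions are annotated with a description, and the host inspects those annotations to build its list of callable functions. The `kernel_function` decorator and `discover` helper below are minimal stand-ins for the idea, not Semantic Kernel’s actual API.

```python
import inspect

def kernel_function(description: str):
    """Illustrative annotation: marks a method as callable by the kernel."""
    def wrap(fn):
        fn._kernel_description = description
        return fn
    return wrap

class WeatherPlugin:
    """A 'native plugin': ordinary host-language code plus annotations."""

    @kernel_function("Get the current temperature for a city.")
    def current_temp(self, city: str) -> str:
        return f"18 °C in {city}"  # stubbed; a real plugin would call an API

def discover(plugin) -> dict:
    # The host scans the plugin for annotated methods and collects
    # their descriptions to expose to the model.
    return {
        name: getattr(member, "_kernel_description")
        for name, member in inspect.getmembers(plugin, callable)
        if hasattr(member, "_kernel_description")
    }

funcs = discover(WeatherPlugin())
print(funcs)  # {'current_temp': 'Get the current temperature for a city.'}
```

OpenAPI and MCP imports follow the same contract; they just source the function definitions from an API spec or a network service instead of local annotations.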

Inside a plugin, functions fall into two categories:

  • Retrieval functions fetch information for retrieval‑augmented generation (RAG). They might query a database, run a search or retrieve documents.
  • Task automation functions perform actions, such as sending emails or updating a CRM. These often require human-in-the-loop review.
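The distinction matters operationally: retrieval functions are read-only and safe to run automatically, while task automation functions have side effects and warrant a review gate. A minimal sketch, with hypothetical function names and an illustrative approval callback rather than any framework’s API:

```python
def search_docs(query: str) -> list[str]:
    """Retrieval function: read-only, safe to auto-run (feeds RAG)."""
    corpus = {"refunds": ["Refunds are issued within 5 business days."]}
    return corpus.get(query, [])

def send_email(to: str, body: str, approve=lambda action: False) -> str:
    """Task automation function: has side effects, so it is gated on a
    human-in-the-loop approval callback before anything is sent."""
    if not approve(f"send email to {to}"):
        return "blocked: awaiting human approval"
    return f"sent to {to}"  # stub; a real function would hit a mail API

print(search_docs("refunds"))                                        # auto-run
print(send_email("ops@example.com", "hi"))                           # blocked
print(send_email("ops@example.com", "hi", approve=lambda a: True))   # approved
```

Defaulting `approve` to deny means a task automation function fails closed: nothing is sent unless a human (or an explicit policy) signs off.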

