Releases: vercel/modelfusion

v0.105.0 - 2023-12-26

Added

  • Tool call support for chat prompts. Assistant messages can contain tool calls, and tool messages can contain tool call results. Tool calls can be used to implement, for example, agents:

    const chat: ChatPrompt = {
      system: "You are ...",
      messages: [ChatMessage.user({ text: instruction })],
    };
    
    while (true) {
      const { text, toolResults } = await useToolsOrGenerateText(
        openai
          .ChatTextGenerator({ model: "gpt-4-1106-preview" })
          .withChatPrompt(),
        tools, // array of tools
        chat
      );
    
      // add the assistant message (including any tool calls) to the chat:
      chat.messages.push(ChatMessage.assistant({ text, toolResults }));

      if (toolResults == null) {
        return; // no tool calls: the assistant answered with text, exit the loop
      }

      // add the tool call results to the chat:
      chat.messages.push(ChatMessage.tool({ toolResults }));
    
      // ... (handle tool results)
    }
  • streamText returns a text promise when invoked with fullResponse: true. The promise resolves with the full text once streaming has finished.

    const { text, textStream } = await streamText(
      openai.ChatTextGenerator({ model: "gpt-3.5-turbo" }).withTextPrompt(),
      "Write a short story about a robot learning to love:",
      { fullResponse: true }
    );
    
    // ... (handle streaming)
    
    console.log(await text); // full text

v0.104.0 - 2023-12-24

Changed

  • breaking change: Unified text and multimodal prompt templates. [Text/MultiModal]InstructionPrompt is now InstructionPrompt, and [Text/MultiModal]ChatPrompt is now ChatPrompt (see the sketch below).
  • More flexible chat prompts: Chat prompt validation is now template-specific and happens at runtime. For example, the Llama2 prompt template only supports alternating turns of user and assistant messages, whereas other formats are more flexible.
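
    A minimal sketch of the unified types (the multi-modal content part shape is an assumption based on this release, not a verified API):

    import { ChatMessage, ChatPrompt, InstructionPrompt } from "modelfusion";

    const image = "..."; // placeholder for a base64-encoded png or jpeg

    // a plain text instruction prompt:
    const textPrompt: InstructionPrompt = {
      system: "You are a helpful assistant.",
      instruction: "Summarize the plot of Hamlet in one paragraph.",
    };

    // a multi-modal instruction prompt (part shape assumed):
    const imagePrompt: InstructionPrompt = {
      instruction: [
        { type: "text", text: "Describe the image in detail." },
        { type: "image", base64Image: image },
      ],
    };

    // a chat prompt; validation (e.g. strict user/assistant turns for Llama2)
    // now happens at runtime in the chosen template:
    const chat: ChatPrompt = {
      system: "You are a helpful assistant.",
      messages: [ChatMessage.user({ text: "Hello!" })],
    };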

v0.103.0 - 2023-12-23

Added

  • finishReason support for generateText.

    The finish reason can be stop (the model generated a stop sequence), length (the model reached the maximum number of tokens), content-filter (the content filter detected a violation), tool-calls (the model triggered a tool call), error (the model stopped because of an error), other (the model stopped for another reason), or unknown (the stop reason is not known or the model does not support finish reasons).

    You can extract it from the full response when using fullResponse: true:

    const { text, finishReason } = await generateText(
      openai
        .ChatTextGenerator({ model: "gpt-3.5-turbo", maxGenerationTokens: 200 })
        .withTextPrompt(),
      "Write a short story about a robot learning to love:",
      { fullResponse: true }
    );
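
    One way to branch on the finish reason, e.g. to flag output that hit the token limit (a sketch, not part of the release notes):

    if (finishReason === "length") {
      // the model hit maxGenerationTokens; the story is likely cut off
      console.log(text + " [truncated]");
    } else {
      console.log(text);
    }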

v0.102.0 - 2023-12-22

Added

  • You can specify numberOfGenerations on image generation models and access the generated images by using the fullResponse: true option. Example:

    // generate 2 images:
    const { images } = await generateImage(
      openai.ImageGenerator({
        model: "dall-e-3",
        numberOfGenerations: 2,
        size: "1024x1024",
      }),
      "the wicked witch of the west in the style of early 19th century painting",
      { fullResponse: true }
    );
  • breaking change: Image generation models use a generalized numberOfGenerations parameter (instead of model specific parameters) to specify the number of generations.

v0.101.0 - 2023-12-22

Changed

  • The Automatic1111 Stable Diffusion Web UI API configuration now has separate settings for host, port, and path (see the sketch below).
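
    A sketch of what the separated settings might look like (automatic1111.Api and the baseUrl option names are assumptions; check the integration docs):

    import { automatic1111 } from "modelfusion";

    // host, port, and path are now configured separately (names assumed):
    const api = automatic1111.Api({
      baseUrl: {
        host: "localhost",
        port: "7860",
        path: "/sdapi/v1",
      },
    });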

Fixed

  • The Automatic1111 Stable Diffusion Web UI integration now applies the negative prompt and seed settings.
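
    For example (a sketch reusing the api object from above; whether negativePrompt and seed belong on the prompt or on the model settings is an assumption):

    import { automatic1111, generateImage } from "modelfusion";

    const image = await generateImage(
      automatic1111.ImageGenerator({
        api,
        model: "sd-v1-5", // example checkpoint name
        seed: 42, // now forwarded to the Web UI (assumed setting name)
      }),
      {
        prompt: "the wicked witch of the west",
        negativePrompt: "blurry, low quality", // now forwarded to the Web UI
      }
    );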

v0.100.0 - 2023-12-17

Added

  • ollama.ChatTextGenerator model that calls the Ollama chat API (see the sketch below).
  • Ollama chat messages and prompts are exposed through ollama.ChatMessage and ollama.ChatPrompt
  • OpenAI chat messages and prompts are exposed through openai.ChatMessage and openai.ChatPrompt
  • Mistral chat messages and prompts are exposed through mistral.ChatMessage and mistral.ChatPrompt
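
    A sketch of calling the new chat model with a raw Ollama chat prompt (the message shape is assumed to mirror the Ollama chat API):

    import { generateText, ollama } from "modelfusion";

    const text = await generateText(
      ollama.ChatTextGenerator({ model: "llama2" }),
      // raw Ollama chat prompt (ollama.ChatPrompt):
      [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Why is the sky blue?" },
      ]
    );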

Changed

  • breaking change: renamed ollama.TextGenerator to ollama.CompletionTextGenerator
  • breaking change: renamed mistral.TextGenerator to mistral.ChatTextGenerator
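
    In code, the renames look like this (model names are illustrative):

    // before v0.100.0:
    ollama.TextGenerator({ model: "mistral" });
    mistral.TextGenerator({ model: "mistral-tiny" });

    // from v0.100.0 on:
    ollama.CompletionTextGenerator({ model: "mistral" });
    mistral.ChatTextGenerator({ model: "mistral-tiny" });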

v0.99.0 - 2023-12-16

Added

  • You can now specify numberOfGenerations on text generation models and access multiple generations by using the fullResponse: true option. Example:

    // generate 2 texts:
    const { texts } = await generateText(
      openai.CompletionTextGenerator({
        model: "gpt-3.5-turbo-instruct",
        numberOfGenerations: 2,
        maxGenerationTokens: 1000,
      }),
      "Write a short story about a robot learning to love:\n\n",
      { fullResponse: true }
    );
  • breaking change: Text generation models now use a generalized numberOfGenerations parameter (instead of model specific parameters) to specify the number of generations.

Changed

  • breaking change: Renamed maxCompletionTokens text generation model setting to maxGenerationTokens.

v0.98.0 - 2023-12-16

Changed

  • breaking change: The responseType option was changed into a boolean fullResponse option to make it easier to discover. The response values from the full response have been renamed for clarity. For base64 image generation, you can use the imageBase64 value from the full response:

    const { imageBase64 } = await generateImage(model, prompt, {
      fullResponse: true,
    });

Improved

  • Better docs for the OpenAI chat settings. Thanks @bearjaws for the contribution!

Fixed

  • Streaming OpenAI chat text generation with n: 2 or higher now returns only the stream from the first choice.

v0.97.0 - 2023-12-14

Added

  • breaking change: Ollama image (vision) support. This changes the Ollama prompt format. You can add .withTextPrompt() to existing Ollama text generators to keep using plain text prompts as before.

    Vision example:

    import { ollama, streamText } from "modelfusion";
    
    const textStream = await streamText(
      ollama.TextGenerator({
        model: "bakllava",
        maxCompletionTokens: 1024,
        temperature: 0,
      }),
      {
        prompt: "Describe the image in detail",
        images: [image], // base64-encoded png or jpeg
      }
    );

Changed

  • breaking change: Switch Ollama settings to camelCase to align with the rest of the library.
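
    For example, Ollama-native options that mirror the REST API's snake_case fields are now written in camelCase (the specific setting name is an assumption for illustration):

    import { ollama } from "modelfusion";

    const model = ollama.TextGenerator({
      model: "mistral",
      temperature: 0,
      repeatPenalty: 1.1, // camelCase of Ollama's repeat_penalty (assumed)
    });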

v0.96.0 - 2023-12-14