Changelog of @hackage/openai 2.5.3

2.5.3:

  • Fix streaming Responses and chat-completions requests so caller-injected headers added via ClientEnv.middleware are preserved on the wire.

2.5.2:

  • Add ResponseWithMetadata wrapper and lookupHeader helper for accessing raw HTTP response headers from API calls.
  • Add createResponseWithMetadata for non-streaming responses with headers.
  • Add createResponseStreamWithMetadata and createResponseStreamTypedWithMetadata for streaming responses with headers (delivered via callback before events flow).
  • Existing createResponse, createResponseStream, and createResponseStreamTyped remain unchanged.
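The wrapper pattern above can be sketched in a few lines. This is a simplified, self-contained illustration, not the library's actual definitions: the `ResponseWithMetadata` and `lookupHeader` names come from the changelog, but the field names, the `[(String, String)]` header representation, and the case-insensitive lookup behavior are assumptions for the sake of the example.

```haskell
import Data.Char (toLower)

-- Hypothetical wrapper pairing a decoded response body with the raw
-- HTTP response headers it arrived with (field names are illustrative).
data ResponseWithMetadata a = ResponseWithMetadata
    { responseBody    :: a
    , responseHeaders :: [(String, String)]
    } deriving Show

-- Case-insensitive header lookup, one plausible shape for lookupHeader.
lookupHeader :: String -> ResponseWithMetadata a -> Maybe String
lookupHeader name r =
    lookup (map toLower name)
           [ (map toLower k, v) | (k, v) <- responseHeaders r ]

main :: IO ()
main = do
    let r = ResponseWithMetadata "ok" [("X-Request-Id", "req_123")]
    print (lookupHeader "x-request-id" r)  -- Just "req_123"
```

In the real API the `*WithMetadata` variants return this wrapper alongside (or instead of) the plain response, so existing callers of `createResponse` are unaffected.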

2.5.1:

  • Add 28 missing ResponseStreamEvent constructors covering all event types from the OpenAPI spec (lifecycle, reasoning, refusal, audio, image generation, MCP, and custom tool call events).
  • Add UnknownEvent fallback constructor that gracefully handles unrecognized event types instead of crashing the parser.
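The fallback idea is that an unrecognized event `type` tag maps to `UnknownEvent` rather than failing the whole stream. A minimal sketch of that dispatch, with a deliberately tiny event type (the real `ResponseStreamEvent` has far more constructors and carries full payloads, and the parsing happens in a JSON decoder rather than over bare strings):

```haskell
-- Simplified stand-in for ResponseStreamEvent (illustrative only).
data StreamEvent
    = ResponseCompleted
    | ResponseOutputTextDelta String
    | UnknownEvent String        -- fallback: preserve the raw event type
    deriving (Eq, Show)

-- Sketch of the fallback dispatch: known "type" tags map to their
-- constructors; anything else becomes UnknownEvent instead of a
-- parse failure that would abort the stream.
parseEventType :: String -> String -> StreamEvent
parseEventType ty payload = case ty of
    "response.completed"         -> ResponseCompleted
    "response.output_text.delta" -> ResponseOutputTextDelta payload
    other                        -> UnknownEvent other
```

This keeps the client forward-compatible: when OpenAI adds new event types, streams keep flowing and callers can inspect or ignore the `UnknownEvent` values.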

2.5.0:

  • BREAKING (streaming imports): move the chat-completion streaming payload types (ChatCompletionChunk, ChunkChoice, Delta, ChatCompletionStreamEvent) from OpenAI.V1.Chat.Completions to OpenAI.V1.Chat.Completions.Stream, so that non-streaming imports no longer hit record-selector ambiguity on the choices and usage fields.
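For code that consumed these types, the migration is a one-line import change (the import lists below are illustrative; import whichever of the moved types you actually use):

```haskell
-- Before (v2.4.x and earlier):
import OpenAI.V1.Chat.Completions (ChatCompletionChunk, ChunkChoice, Delta)

-- After (v2.5.0): streaming payload types live in the Stream module
import OpenAI.V1.Chat.Completions.Stream (ChatCompletionChunk, ChunkChoice, Delta)
```

Non-streaming types remain in OpenAI.V1.Chat.Completions, so imports that never touched the streaming payloads need no changes.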

2.4.0:

  • Add Structured Outputs support to the Responses API (TextConfig, TextFormat, TextFormat_JSON_Schema).
  • Extend the Responses API with additional request/response fields, including conversation, prompt, tool_resources, truncation, max_output_tokens, max_tool_calls, top_logprobs, safety_identifier, prompt_cache_key, and user.
  • Add verbosity support on Responses text config via Verbosity and text.verbosity.
  • Add ReasoningEffort_None for Responses reasoning effort.

2.3.0:

  • Add ChatCompletionStreamOptions support for including usage in chat completion streams.
  • Simplify Responses tool JSON encoding/decoding.
  • Add support for /v1/chatkit/*.
  • Add missing arguments to makeMethods in the create-chat example.

2.2.1:

2.2.0:

  • Add structured reasoning support to OpenAI.V1.Responses, including the Reasoning, ReasoningEffort, and ReasoningSummary types, plus the ServiceTier alias.
  • Extend CreateResponse and ResponseObject with reasoning and service_tier fields to round-trip reasoning configuration.

2.1.0:

  • Add Item_Input_Reasoning constructor and JSON round-trip test so reasoning traces from tool calls can be echoed back via the Responses API.

2.0.0:

  • BREAKING CHANGE: Renamed Item_InputMessage to Item_Input_Message in OpenAI.V1.Responses for consistency with new constructors

    The InputItem data type in the Responses API has been updated to use a consistent naming scheme. If you are using the Responses API, you will need to update your code:

    -- Before (v1.2.0):
    Item_InputMessage { role = ..., content = ..., status = ... }
    
    -- After (v2.0.0):
    Item_Input_Message { role = ..., content = ..., status = ... }
    
  • Add support for function tool calls in Responses API

    • New InputItem constructors: Item_Input_Function_Call, Item_Input_Function_Call_Output, Item_Input_Item_Reference
    • Add flattened tool JSON representation for Responses API compatibility
    • Export status constants: statusIncomplete, statusCompleted
    • Export tool choice constants: toolChoiceNoneText, toolChoiceAutoText, toolChoiceRequiredText
  • Code quality improvements:

    • Optimize isFunctionField using HashSet for O(1) lookups
    • Simplify unflattenChoice with guards
    • Extract magic strings as named constants

1.2.0:

1.1.1:

1.1.0:

1.0.1:

  • Include README
  • Include usage example

1.0.0:

  • Initial release