Imagegen

    MCP server for OpenAI Image Generation & Editing — text-to-image, image-to-image (with mask), no extra plugins.

    GitHub Stats

    • Stars: 19
    • Forks: 3
    • Release Date: 6/16/2025 (about three weeks ago)

    Detailed Description

    MCP OpenAI Image Generation Server

    This project provides a server implementation based on the Model Context Protocol (MCP) that acts as a wrapper around OpenAI's Image Generation and Editing APIs (see the OpenAI documentation).

    Features

    • Exposes OpenAI image generation capabilities through MCP tools.
    • Supports text-to-image generation using models like DALL-E 2, DALL-E 3, and gpt-image-1 (if available/enabled).
    • Supports image-to-image editing using DALL-E 2 and gpt-image-1 (if available/enabled).
    • Configurable via environment variables and command-line arguments.
    • Handles various parameters like size, quality, style, format, etc.
    • Saves generated/edited images to temporary files and returns the path along with the base64 data.

    Here's an example of generating an image directly in Cursor using the text-to-image tool integrated via MCP:

    Quick Run with npx

    You can run the server directly from npm using npx (requires Node.js and npm):

    npx imagegen-mcp [options]
    

    See the Running the Server section for more details on options and running locally.

    Prerequisites

    • Node.js (v18 or later recommended)
    • npm or yarn
    • An OpenAI API key

    Integration with Cursor

    You can easily integrate this server with Cursor to use its image generation capabilities directly within the editor:

    1. Open Cursor Settings:

      • Go to File > Preferences > Cursor Settings (or use the shortcut Ctrl+, / Cmd+,).
    2. Navigate to MCP settings:

      • Search for "MCP" in the settings search bar.
      • Find the "Model Context Protocol: Custom Servers" setting.
    3. Add a custom server:

      • Click on "Edit in settings.json".
      • Add a new entry under mcpServers. It should look something like this:
      "mcpServers": {
          "image-generator-gpt-image": {
              "command": "npx imagegen-mcp --models gpt-image-1",
              "env": {
                  "OPENAI_API_KEY": "xxx"
              }
          }
        // ... any other custom servers ...
      }

      • Customize the command:
        • You can change the --models argument in the command field to specify which models you want Cursor to have access to (e.g., --models dall-e-3 or --models gpt-image-1). Make sure your OpenAI API key has access to the selected models.
    4. Save settings:

      • Save the settings.json file.

    Cursor should now recognize the "OpenAI Image Gen" server, and its tools (text-to-image, image-to-image) will be available in the MCP tool selection list (e.g., when using @ mention in chat or code actions).

    Setup

    1. Clone the repository:

      git clone <your-repository-url>
      cd <repository-directory>

    2. Install dependencies:

      npm install
      # or
      yarn install

    3. Configure environment variables: create a .env file in the project root by copying the example:

      cp .env.example .env

      Edit the .env file and add your OpenAI API key:

      OPENAI_API_KEY=your_openai_api_key_here
      

    Building

    To build the TypeScript code into JavaScript:

    npm run build
    # or
    yarn build

    This will compile the code into the dist directory.

    Running the Server

    This section provides details on running the server locally after cloning and setup. For a quick start without cloning, see the Quick Run with npx section.

    Using ts-node (for development):

    npx ts-node src/index.ts [options]
    

    Using the compiled code:

    node dist/index.js [options]
    

    Options:

    • --models <model1> <model2> ...: Specify which OpenAI models the server should allow. If not provided, it defaults to allowing all models defined in src/libs/openaiimageclient.ts (currently gpt-image-1, dall-e-2, dall-e-3).
      • Example using npx (also works for local runs): ... --models gpt-image-1 dall-e-3
      • Example after cloning: node dist/index.js --models dall-e-3 dall-e-2

    The server will start and listen for MCP requests via standard input/output (using StdioServerTransport).
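
    If you want to drive the server from your own code rather than from Cursor, you can spawn it over stdio with the official TypeScript MCP SDK. The following is a minimal, illustrative sketch, not part of this project: the @modelcontextprotocol/sdk import paths and client API belong to that SDK, and the client name/version and API key value are placeholders.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    async function main() {
      // Spawn the server via npx and talk to it over standard input/output.
      const transport = new StdioClientTransport({
        command: "npx",
        args: ["imagegen-mcp", "--models", "gpt-image-1"],
        // Forward the parent environment plus the API key the server expects.
        env: { ...process.env, OPENAI_API_KEY: "your_openai_api_key_here" } as Record<string, string>,
      });

      const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
      await client.connect(transport);

      // Should list the text-to-image and image-to-image tools described below.
      console.log(await client.listTools());
    }

    main().catch(console.error);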

    MCP Tools

    The server exposes the following MCP tools:

    text-to-image

    Generates an image based on a text prompt.

    Parameters:

    • text (string, required): The prompt to generate an image from.
    • model (enum, optional): The model to use (e.g., gpt-image-1, dall-e-2, dall-e-3). Defaults to the first allowed model.
    • size (enum, optional): Size of the generated image (e.g., 1024x1024, 1792x1024). Defaults to 1024x1024. Check the OpenAI documentation for model-specific size support.
    • style (enum, optional): Style of the image (vivid or natural). Only applicable to dall-e-3. Defaults to vivid.
    • output_format (enum, optional): Format (png, jpeg, webp). Defaults to png.
    • output_compression (number, optional): Compression level (0-100). Defaults to 100.
    • moderation (enum, optional): Moderation level (low, auto). Defaults to low.
    • background (enum, optional): Background (transparent, opaque, auto). Defaults to auto. transparent requires output_format to be png or webp.
    • quality (enum, optional): Quality (standard, hd, auto, ...). Defaults to auto. hd is only applicable to dall-e-3.
    • n (number, optional): Number of images to generate. Defaults to 1. Note: dall-e-3 only supports n=1.

    Returns:

    • content: An array containing:
      • A text object containing the path to the saved temporary image file (e.g., /tmp/uuid.png).
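    For example, with the MCP SDK client from the sketch in the Running the Server section, a text-to-image call could look like the following. The prompt and option values are placeholders; the argument names follow the parameter list above.

    // Assumes `client` is the connected MCP client from the earlier sketch.
    const result = await client.callTool({
      name: "text-to-image",
      arguments: {
        text: "a watercolor painting of a lighthouse at dusk",
        model: "gpt-image-1",
        size: "1024x1024",
        output_format: "png",
        n: 1,
      },
    });

    // result.content[0] should be a text item holding the temporary file path.
    console.log(result.content);
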

    image-to-image

    Edits an existing image based on a text prompt and an optional mask.

    Parameters:

    • images (string[], required): An array of file paths to local images.
    • prompt (string, required): A text description of the desired edits.
    • mask (string, optional): A file path to a mask image (PNG). Transparent areas indicate where the image should be edited.
    • model (enum, optional): The model to use. Only gpt-image-1 and dall-e-2 are supported for editing. Defaults to the first allowed model.
    • size (enum, optional): Size of the generated image (e.g., 1024x1024). Defaults to 1024x1024. dall-e-2 only supports 256x256, 512x512, and 1024x1024.
    • output_format (enum, optional): Format (png, jpeg, webp). Defaults to png.
    • output_compression (number, optional): Compression level (0-100). Defaults to 100.
    • quality (enum, optional): Quality (standard, hd, auto, ...). Defaults to auto.
    • n (number, optional): Number of images to generate. Defaults to 1.

    Returns:

    • content: An array containing:
      • A text object containing the path to the saved temporary image file (e.g., /tmp/uuid.png).
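
    Similarly, an image-to-image call through the same illustrative client could look like this; the file paths and prompt are placeholders, and the mask argument is optional.

    // Assumes `client` is the connected MCP client from the earlier sketch.
    const edited = await client.callTool({
      name: "image-to-image",
      arguments: {
        images: ["/path/to/source.png"],
        prompt: "replace the sky with a starry night",
        mask: "/path/to/mask.png", // transparent areas mark the editable region
        model: "gpt-image-1",
        n: 1,
      },
    });

    console.log(edited.content);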

    Development

    • Linting: npm run lint or yarn lint
    • Formatting: npm run format or yarn format (if configured in package.json)

    Contributing

    Pull requests (PRs) are welcome! Please feel free to submit improvements or bug fixes.

    Star History

    [Star history chart: May 1 – Jul 7, 2025; 0–20 stars]
