[AI Cog] Want to Run an AI Business but Lack a GPU? Can't Set Up the Environment? Use Cog to Easily Deploy Your Business to the Cloud

Explore how to deploy your AI business to the cloud using Cog, achieving serverless deployment without a GPU.

When you want to start an AI business but lack a GPU, what should you do?

Consider using Cog to deploy your AI service to the cloud as a serverless endpoint.

Let's walk through how to get it there with Cog.

First, find a development server. Any Linux machine with Docker installed will do, since Cog builds on Docker.

Cog

Installation

```shell
sudo curl -o /usr/local/bin/cog -L https://github.com/replicate/cog/releases/latest/download/cog_`uname -s`_`uname -m`
sudo chmod +x /usr/local/bin/cog
```

Verification

This step is optional; it mainly verifies that your environment (Docker, networking) is working.

```shell
sudo cog predict r8.im/stability-ai/stable-diffusion@sha256:f178fa7a1ae43a9a9af01b833b9d2ecf97b1bcb0acfd2dc5dd04895e042863f1 -i prompt="a pot of gold"
```

Initialization

```shell
cog init
```

This generates the main files:

```
├── cog.yaml    # similar to a Dockerfile, defines the environment
├── predict.py  # inference code
```

Writing Code

Modify the generated files as follows.

cog.yaml is similar to a Dockerfile: it defines the build environment.

```yaml
# Configuration for Cog ⚙️
# Reference: https://cog.run/yaml

build:
  # set to true if your model requires a GPU
  gpu: false

  # a list of ubuntu apt packages to install
  # system_packages:
  #   - "libgl1-mesa-glx"
  #   - "libglib2.0-0"

  # python version in the form '3.11' or '3.11.4'
  python_version: "3.10"

  # a list of packages in the format <package-name>==<version>
  # python_packages:
  #   - "numpy==1.19.4"
  #   - "torch==1.8.0"
  #   - "torchvision==0.9.0"

  # commands run after the environment is setup
  # run:
  #   - "echo env is ready!"
  #   - "echo another command if needed"

# predict.py defines how predictions are run on your model
predict: "predict.py:Predictor"
```

predict.py defines the inputs (name: str, scale: float), outputs (str), and the inference process.

```python
# Prediction interface for Cog ⚙️
# https://cog.run/python

from cog import BasePredictor, Input, Path


class Predictor(BasePredictor):
    def setup(self) -> None:
        """Load the model into memory to make running multiple predictions efficient"""
        # self.model = torch.load("./weights.pth")

    def predict(
        self,
        name: str = Input(description="Your name"),
        # image: Path = Input(description="Grayscale input image"),
        scale: float = Input(
            description="Factor to scale image by", ge=0, le=10, default=1.5
        ),
    ) -> str:
        """Run a single prediction on the model"""
        # processed_input = preprocess(image)
        # output = self.model(processed_input, scale)
        # return postprocess(output)
        return "hello " + name + " and scale " + str(scale)
```
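The contract above can be sketched in plain Python (a stand-in without the cog dependency, mirroring the `ge=0, le=10` constraint on `scale`):

```python
# Minimal stand-in for the Predictor above, without the cog dependency,
# illustrating the input/output contract (name: str, scale: float -> str).
def predict(name: str, scale: float = 1.5) -> str:
    if not 0 <= scale <= 10:  # mirrors ge=0, le=10 in the Input() declaration
        raise ValueError("scale must be between 0 and 10")
    return "hello " + name + " and scale " + str(scale)

print(predict("Learn AI from Scratch"))  # hello Learn AI from Scratch and scale 1.5
```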

Local Testing

Test it out.

```shell
cog predict -i name="Learn AI from Scratch"
```

Output:

```
Starting Docker image cog-git-base and running setup()...
Running prediction...
hello Learn AI from Scratch and scale 1.5
```

Deployment

First, create a model on the Replicate website; then log in and push it to the cloud.

```shell
cog login
cog push r8.im/<your-username>/<your-model-name>
```

Cloud Testing

[Screenshot: entering the model's inputs on Replicate]

[Screenshot: the model's output on Replicate]

Test successful!

Afterwards, you can also call the model via the API.
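As a sketch, such a call can be assembled against Replicate's HTTP predictions endpoint. The `<version-id>` below is a placeholder (copy the real id from your model's page), and a `REPLICATE_API_TOKEN` environment variable is assumed:

```python
import json
import os
import urllib.request

# Sketch: create a prediction via Replicate's HTTP API.
# "<version-id>" is a placeholder; copy the real id from your model's page.
API_URL = "https://api.replicate.com/v1/predictions"

def build_request(version: str, name: str, scale: float) -> urllib.request.Request:
    """Assemble the POST request without sending it."""
    body = json.dumps({"version": version, "input": {"name": name, "scale": scale}})
    return urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={
            "Authorization": "Bearer " + os.environ.get("REPLICATE_API_TOKEN", ""),
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("<version-id>", "Learn AI from Scratch", 1.5)
# urllib.request.urlopen(req) would start the prediction (needs a valid token)
print(req.full_url)
```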

Conclusion

This article demonstrates the entire process of using Cog to deploy to the cloud.

The example does not use a GPU; if your model needs one, set `gpu: true` in cog.yaml and check the documentation for details.
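For reference, a GPU build section might look like this (a sketch only; the package pins are examples, so adjust them to your model's needs):

```yaml
build:
  gpu: true              # request a GPU at build and run time
  python_version: "3.10"
  python_packages:
    - "torch==2.0.1"     # example pins; match your model's requirements
    - "torchvision==0.15.2"
predict: "predict.py:Predictor"
```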

