Want to build your own ChatGPT alternative on the cheap? There's one way



For a couple of thousand dollars a month, you can now reserve the capacity of a single Nvidia HGX H100 GPU through a company called CoreWeave. The H100 is the successor to the A100, the GPU that was instrumental in training the large language model (LLM) behind ChatGPT. Prices start at $2.33 per hour, which works out to about $56 per day or roughly $20,000 a year; by comparison, a single HGX H100 costs about $28,000 on the open market (NVH100TCGPU-KIT), and less wholesale.
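A quick back-of-the-envelope sketch of those numbers (a hypothetical Python snippet; the break-even comparison is an illustration only and ignores the server, power, and networking you would need to run a card you own):

```python
# Rental cost of a reserved HGX H100 on CoreWeave versus buying the card outright.
# Figures come from the article; the break-even calculation is an assumption for
# illustration, not CoreWeave's pricing model.

HOURLY_RESERVED = 2.33       # USD per GPU-hour, reserved capacity
CARD_PRICE_RETAIL = 28_000   # USD, open-market price (NVH100TCGPU-KIT)

daily = HOURLY_RESERVED * 24
yearly = daily * 365
breakeven_days = CARD_PRICE_RETAIL / daily

print(f"Daily rental cost:  ${daily:,.2f}")    # ~ $56
print(f"Yearly rental cost: ${yearly:,.2f}")   # ~ $20,000
print(f"Days of rental to match the retail card price: {breakeven_days:,.0f}")
```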

You will pay extra for spot pricing ($4.76 per hour), and while a cheaper SKU exists (the HGX H100 PCIe, as opposed to the NVLink model), it cannot be ordered yet. A valid GPU instance configuration must include at least one GPU, at least one vCPU, and at least 2 GB of RAM. When deploying a Virtual Private Server (VPS), the configuration must also include at least 40 GB of NVMe-tier root disk storage.
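Those minimums can be read as a small validation rule set. The sketch below is hypothetical; the field names and the `validate_gpu_instance` helper are assumptions for illustration, not CoreWeave's API.

```python
# Minimal sketch of the instance-configuration rules described above, assuming a
# plain dictionary-based config (hypothetical field names).

def validate_gpu_instance(config: dict, is_vps: bool = False) -> list[str]:
    """Return a list of rule violations for a candidate GPU instance configuration."""
    errors = []
    if config.get("gpus", 0) < 1:
        errors.append("at least one GPU is required")
    if config.get("vcpus", 0) < 1:
        errors.append("at least one vCPU is required")
    if config.get("ram_gb", 0) < 2:
        errors.append("at least 2 GB of RAM is required")
    if is_vps and config.get("root_disk_nvme_gb", 0) < 40:
        errors.append("a VPS needs at least 40 GB of NVMe root disk storage")
    return errors

# Example: a single-H100 VPS configuration that satisfies all of the minimums.
print(validate_gpu_instance(
    {"gpus": 1, "vcpus": 8, "ram_gb": 64, "root_disk_nvme_gb": 100},
    is_vps=True,
))  # -> []
```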


