Inevitable_Host_1446 t1_iz50q5y wrote
Reply to [D] Is there an affordable way to host a diffusers Stable Diffusion model publicly on the Internet for "real-time"-inference? (CPU or Serverless GPU?) by OkOkPlayer
Assuming speed is your problem rather than wanting to share it with others, a decent longer-term solution would be to put together a cheap rig with an RTX 3060 12GB. It has enough VRAM and tensor cores to do pretty well relative to its price, at least an order of magnitude faster than CPU inference. A minimal sketch of what inference on a card like that could look like with the diffusers library is below; the model ID and settings are my assumptions, not something OP specified:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load in half precision so the model fits comfortably in 12GB of VRAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Optional: trades a little speed for lower peak VRAM usage.
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("output.png")
```
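With fp16 weights, a pipeline like this typically needs well under 12GB, so the 3060 has headroom for larger batch sizes or higher resolutions.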
Inevitable_Host_1446 t1_iz517w5 wrote
Reply to [D] OpenAI’s ChatGPT is unbelievable good in telling stories! by Far_Pineapple770
Assuming this isn't edited, that is quite remarkable. I use GPT-NeoX / Fairseq quite a lot for writing stories, but they are nowhere near this level in terms of understanding.