I_will_delete_myself

I_will_delete_myself t1_jad9amj wrote

ChatGPT uses GPT-3.5, which is a pre-trained model. Google uses pre-trained models. Facebook released a pre-trained model recently as well.

If these models satisfy their needs, they will almost certainly satisfy yours. Unless you are tackling a kind of problem that hasn't been tackled before, a pre-trained model will save you a lot of training time and require far less data to converge and actually be useful.
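The time savings come from reusing already-learned features and only training a small new head. Here is a minimal transfer-learning sketch, assuming torchvision is installed; the ResNet-18 backbone and 10-class head are illustrative choices, not something from this thread:

```python
# Transfer-learning sketch: freeze a pretrained backbone, train a new head.
try:
    import torch.nn as nn
    from torchvision import models

    # Load ImageNet-pretrained weights (downloads them on first use).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False  # keep the pretrained features fixed
    # Replace the classifier head for a hypothetical 10-class task;
    # only these fresh parameters get trained.
    model.fc = nn.Linear(model.fc.in_features, 10)
    sketch_ok = True
except Exception:  # torchvision missing, or weights unavailable offline
    sketch_ok = False
```

Because most parameters are frozen, each training step is cheaper and far fewer labeled examples are needed than when training from scratch.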

2

I_will_delete_myself OP t1_j9fp5fh wrote

Try looking into whether they have an API. Shutdowns are rare, but they happen; I have only run into one once. Having the cloud console on your mobile device is great: it lets you check on things from anywhere and handle simple tasks quickly.

1

I_will_delete_myself OP t1_j9fodao wrote

>Can you recommend a tutorial or something that explains the steps to move from (e.g. pytorch) training on your own machine to training that model in the Cloud (e.g. AWS)?

Same as running on your own machine.

>What type of instances to chose, how/where to store data, making sure Nvidia/CUDA stuff is working properly, etc.?

Just look up an EC2 instance or VM that has the GPU you want and there you go. nvidia-smi is the command that should list the GPU you have; if it outputs your GPU, the driver is working. I would also suggest checking in your code that CUDA is actually available.
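A quick way to do that in-code check (a sketch assuming a PyTorch workflow; nvidia-smi alone isn't enough, since a CPU-only PyTorch build will still pass it):

```python
# Check that PyTorch can actually see the GPU on the new instance.
try:
    import torch
    cuda_ok = torch.cuda.is_available()
    device = torch.cuda.get_device_name(0) if cuda_ok else "none"
except ImportError:
    cuda_ok, device = False, "torch not installed"

print(f"CUDA available: {cuda_ok}, device: {device}")
```

If this prints False while nvidia-smi works, the usual culprit is a CPU-only framework install or a driver/CUDA version mismatch.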

I prefer to use an EC2 instance or VM because it's normally cheaper, but you have to do your own research on pricing. Cloud is a competitive market, so there is always someone ready to offer an A100 at a lower price. I've heard Lambda Cloud is super cheap for on-demand.

1

I_will_delete_myself OP t1_j98p0vg wrote

I've been running an A100 the entire weekend and so far it has cost me under 20 bucks. If you only need it for around an hour, it would probably cost you between 1 and 3 dollars.

I would recommend planning a budget before you get started; on a yearly basis it will almost always work out cheaper. Try Colab first and see whether you will need it for longer than 12 hours.
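The budgeting here is simple hours-times-rate arithmetic. A back-of-envelope sketch (the rates are hypothetical examples; check your provider's current pricing):

```python
# Back-of-envelope cloud GPU cost estimate.
def gpu_cost(hours: float, rate_per_hour: float) -> float:
    """Return the estimated spend in dollars, rounded to cents."""
    return round(hours * rate_per_hour, 2)

# An hour-long experiment at a hypothetical $1.50/hr on-demand rate:
print(gpu_cost(1, 1.50))    # -> 1.5
# A full 48-hour weekend at a hypothetical $0.40/hr rate:
print(gpu_cost(48, 0.40))   # -> 19.2
```

Running the numbers like this before you spin up an instance is how you avoid surprise bills, and it tells you quickly whether free Colab time already covers your workload.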

6

I_will_delete_myself t1_j8zvipt wrote

GPU availability on Paperspace is terrible. I would rather use Colab for that, and a VM for heavy loads. I got a refund after it took me a day to find a GPU; I don't have time to watch 24/7 for a GPU that gets snagged in seconds. This was on the paid option.

1

I_will_delete_myself t1_j5b4ccq wrote

It doesn't make any sense to run the neural network on the client side at all. YouTube takes a moment to process your video before it finishes uploading, which is probably when their deep learning algorithms do their work. After that you just save the frames and don't run the neural networks again.

This is a plausible guess, because uploading a video to YouTube takes a lot longer than on other platforms that do no checks at all.

1

I_will_delete_myself t1_j4557ug wrote

Correct me if I am wrong:

- AI: a niche part of ML
- ML: AI + data science

Edit: An “intelligent” computer uses AI to think like a human and perform tasks on its own. Machine learning is how a computer system develops its intelligence

https://azure.microsoft.com/en-us/solutions/ai/artificial-intelligence-vs-machine-learning/

−26