LiquidDinosaurs69 t1_j6kzkhm wrote
Reply to comment by hugio55 in Hobbyist: desired software to run evolution by hugio55
Check out Lenia artificial life simulator on YouTube. Similar concept to evolution, pretty sick. Might scratch your itch
LiquidDinosaurs69 t1_j69jpk3 wrote
Reply to Hobbyist: desired software to run evolution by hugio55
Actually, there aren’t evolution-based locomotion optimizers that I know of, but there are reinforcement-learning-based ones (much faster and more efficient). It probably won’t be very difficult: you just need the stable-baselines3 reinforcement learning library and an existing OpenAI Gym environment for whatever walking robot you want to experiment with. I think there are gyms available for humanoids and quadrupeds. You could get it running by following the stable-baselines3 tutorials.
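Something like this would get a humanoid walking with PPO (rough, untested sketch; assumes a recent stable-baselines3 plus Gymnasium with the MuJoCo envs installed, and the env names may differ by version):

```python
# Minimal PPO locomotion example with stable-baselines3 and Gymnasium's MuJoCo envs.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Humanoid-v4")            # quadruped alternative: "Ant-v4"
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)   # this takes a while on CPU
model.save("humanoid_ppo")

# Roll out the trained policy
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```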
Unfortunately, these optimize control policies, not the robot design itself, which I assume is what you want. Theo Jansen (the Strandbeest guy) used an evolutionary algorithm to come up with his design, which I think is what you want to do. I’m not aware of any existing software that lets you do that, though.
You could implement it yourself in Python or C++ using robotics-oriented rigid-body dynamics libraries and solvers, but if you haven’t done this before it will be pretty hard.
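The overall loop would look roughly like this (a bare-bones sketch; the parameter encoding is made up, and the evaluate() stub is where a physics engine such as PyBullet would actually simulate the design):

```python
# Skeleton of an evolutionary search over mechanism/robot design parameters.
import random

def evaluate(design):
    # Simulate the mechanism described by `design` and return e.g. distance walked.
    # Stubbed out here; in practice this calls into a rigid-body dynamics simulator.
    return -sum((x - 0.5) ** 2 for x in design)  # placeholder fitness

def mutate(design, sigma=0.05):
    return [x + random.gauss(0, sigma) for x in design]

# Each design is a list of 8 made-up parameters (e.g. link lengths) in [0, 1].
population = [[random.random() for _ in range(8)] for _ in range(50)]
for generation in range(100):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[:10]                                   # keep the best 20%
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("best design:", max(population, key=evaluate))
```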
This is actually something I want to do at some point too but I’m busy with other projects right now.
LiquidDinosaurs69 t1_j558t5l wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
Woah
LiquidDinosaurs69 t1_j44wp7w wrote
Reply to [P] Creating A Code-Generating AI Model by Syntro42
It’s definitely infeasible to train a large language model yourself, or even to run inference on one locally; you would need many datacenter GPUs. But you could create an application that interfaces with the ChatGPT API (or some other API-accessible LLM).
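For example, a thin wrapper around the OpenAI Python client could be the core of the app (rough sketch; assumes `pip install openai`, an API key in the environment, and a placeholder model name, so check the current docs):

```python
# Sketch: delegate code generation to an API-hosted LLM instead of self-hosting one.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_code(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "You write concise, working Python code."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(generate_code("Write a function that reverses a linked list."))
```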
LiquidDinosaurs69 t1_j1ef3xg wrote
Reply to comment by djc1000 in [D] Tensorflow vs Pytorch for LSTM stock bot by Careful-Temporary388
Not really but I can send you GitHub links to my repos.
LiquidDinosaurs69 t1_j1eexl1 wrote
Reply to comment by Careful-Temporary388 in [D] Tensorflow vs Pytorch for LSTM stock bot by Careful-Temporary388
I think some people have been able to conclusively show a profit with this, actually; I know I’ve read at least one journal paper where they did. During my backtesting I was only able to get up to about 2% profit per year, which isn’t really worth it. Yeah, I’ve heard using data from other sources like Twitter sentiment can work. Maybe weather data would be useful too if you could find a way to use it.
LiquidDinosaurs69 t1_j195usj wrote
Reply to comment by Careful-Temporary388 in [D] Tensorflow vs Pytorch for LSTM stock bot by Careful-Temporary388
RL is what you need when you want to learn to act automatically in an environment (buying and selling Bitcoin, in my case). Deep-learning-based RL requires a neural network for estimating the value of an action, and actor-critic methods also have a policy network. You can build the networks that RL needs out of an LSTM if you want.

I'm not sure whether you want something that automatically figures out a strategy to buy and sell, or whether you just want to predict a stock price. If you only want to predict the price, you don't need RL; an LSTM on its own is sufficient.

I'm using the Stable Baselines3 implementation of PPO (an RL algorithm) with WaveNet-style stacked dilated convolutions as a feature extractor. It's not working though, lol. I want to use an LSTM, but Stable Baselines3 doesn't currently support it, so I'm going to have to find a way to implement it myself.
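For reference, a custom feature extractor plugs into SB3 roughly like this (a simplified sketch, not my actual code; it assumes the observation is shaped (n_features, n_timesteps), and the class and parameter names are made up):

```python
# Sketch: WaveNet-style dilated Conv1d feature extractor for stable-baselines3 PPO.
import torch.nn as nn
from stable_baselines3 import PPO
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

class DilatedConvExtractor(BaseFeaturesExtractor):
    def __init__(self, observation_space, features_dim=64):
        super().__init__(observation_space, features_dim)
        in_channels = observation_space.shape[0]
        layers = []
        for dilation in (1, 2, 4, 8):  # exponentially growing receptive field
            layers += [nn.Conv1d(in_channels, 32, kernel_size=2,
                                 dilation=dilation, padding="same"),
                       nn.ReLU()]
            in_channels = 32
        self.conv = nn.Sequential(*layers)
        self.head = nn.Linear(32, features_dim)

    def forward(self, observations):
        x = self.conv(observations)  # (batch, 32, n_timesteps)
        x = x.mean(dim=-1)           # pool over the time axis
        return self.head(x)

# model = PPO("MlpPolicy", trading_env,
#             policy_kwargs=dict(features_extractor_class=DilatedConvExtractor,
#                                features_extractor_kwargs=dict(features_dim=64)))
```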
LiquidDinosaurs69 t1_j17h0yw wrote
Reply to comment by djc1000 in [D] Tensorflow vs Pytorch for LSTM stock bot by Careful-Temporary388
Oh I should have read his post more carefully
LiquidDinosaurs69 t1_j16n0is wrote
I recommend PyTorch because the Stable Baselines reinforcement learning library uses PyTorch. I’m using Stable Baselines3 to develop an automated Bitcoin trader; it’s a pretty solid RL library.
LiquidDinosaurs69 t1_izxzzhr wrote
Yeah. The way to do it is you need to use math
LiquidDinosaurs69 t1_ira12g8 wrote
Reply to comment by amitraderinthemaking in [D] Fast inferencing in C++ for neural networks by amitraderinthemaking
Sounds cool. I’m just glad that the code I wrote for my grad school research might be useful for someone.
LiquidDinosaurs69 t1_ir8vdvj wrote
Reply to comment by LiquidDinosaurs69 in [D] Fast inferencing in C++ for neural networks by amitraderinthemaking
Actually, here’s the code where I implemented inference for my neural net if you’re interested. It’s very simple. https://github.com/jyurkanin/auvsl_dynamics/blob/float_model/src/TireNetwork.cpp
And here’s a handy script I made to help generate the C++ code for loading the weights into Eigen vectors (just use the print_c_network function): https://github.com/jyurkanin/auvsl_dynamics/blob/float_model/scripts/pretrain.py
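The gist of it is just dumping the trained weights as Eigen initializer code, something like this (a simplified stand-in, not the actual function in the repo):

```python
# Simplified sketch: print a PyTorch model's parameters as C++/Eigen initializer code.
import torch

def dump_weights_as_eigen(model, var_prefix="w"):
    for i, (name, param) in enumerate(model.state_dict().items()):
        values = ", ".join(f"{v:.8f}f" for v in param.flatten().tolist())
        rows, cols = param.shape if param.dim() == 2 else (param.numel(), 1)
        print(f"Eigen::MatrixXf {var_prefix}{i}({rows}, {cols});")
        print(f"{var_prefix}{i} << {values};  // {name}")

net = torch.nn.Sequential(torch.nn.Linear(4, 35), torch.nn.Tanh(), torch.nn.Linear(35, 1))
dump_weights_as_eigen(net)
```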
Also look at my CMakeLists.txt to make sure you have the compiler flags that will make your code run as fast as possible.
LiquidDinosaurs69 t1_ir8v1nq wrote
Reply to comment by amitraderinthemaking in [D] Fast inferencing in C++ for neural networks by amitraderinthemaking
No, I didn’t measure the time. But I had a network with 2 hidden layers of 35 units each, and I was using it as a component of a single-threaded simulation that was running inference over 1000 times a second on an older CPU. Can I ask why you don’t want to use the GPU? CUDA would speed things up a lot if you need more speed.
LiquidDinosaurs69 t1_ir8t1kh wrote
Reply to comment by amitraderinthemaking in [D] Fast inferencing in C++ for neural networks by amitraderinthemaking
I did this for a simple feedforward neural net and it worked pretty well.
LiquidDinosaurs69 t1_ir7qovy wrote
Simply copy and paste the weights and biases into vectors in C++ and do the math yourself for inference. Unless your network is very big, I believe this is actually a pretty valid strategy.
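The math itself is just a couple of matrix multiplies. Shown here in numpy for clarity (the shapes and activation are made-up examples); the Eigen/C++ version is a near line-for-line translation:

```python
# Manual forward pass for a small fully connected net: y = W2 * tanh(W1 * x + b1) + b2
import numpy as np

def forward(x, W1, b1, W2, b2):
    h = np.tanh(W1 @ x + b1)  # hidden layer (use whatever activation you trained with)
    return W2 @ h + b2        # output layer

# Stand-in random weights; in practice paste in the trained values.
W1, b1 = np.random.randn(35, 4), np.random.randn(35)
W2, b2 = np.random.randn(1, 35), np.random.randn(1)
print(forward(np.random.randn(4), W1, b1, W2, b2))
```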
LiquidDinosaurs69 t1_jeg8yos wrote
Reply to Should I continue with this? by Eric-Cardozo
You should do something else. There are a lot of small C++ NN libraries, and to make this one competitive with a real deep learning framework you would need to implement everything on the GPU, which would be painful. Python also has the huge benefit of great data science libraries, which make it much more convenient to preprocess training data in Python than in C++.

Additionally, there are ways to deploy Python models to C++, so there’s not much benefit in training with a C++ library anyway.
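TorchScript is one of those ways, for example (minimal sketch; the model here is just a placeholder):

```python
# Export a trained PyTorch model with TorchScript; the saved file can be loaded
# from C++ with libtorch's torch::jit::load().
import torch

model = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
model.eval()

example_input = torch.randn(1, 10)
traced = torch.jit.trace(model, example_input)  # records the forward pass
traced.save("model_traced.pt")                  # load this from C++ via libtorch
```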