I wonder: can you ask ChatGPT for signals too?
- | Joined Jan 2023 | Status: Member | 47 Posts
It is not about what you think, but what you see and your actions after!
Hey joyny! I'm experimenting with OpenAI a lot now. Yesterday I was googling, then searching Stack Overflow, then checking things; today I just ask the AI to write the code for me, and it works well. At least much faster than the first approach. But I have hit some issues. For example, I was trying to implement a ClickHouse database, and I asked it how to delete all data in the database older than 30 minutes, for example. And you know what? I tried 100 times and it always told me "DELETE FROM...". But that is not correct! In ClickHouse the correct statement is ALTER...
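For context on the ClickHouse point: bulk deletes there are issued as an ALTER TABLE ... DELETE mutation rather than a classic DELETE FROM. A minimal sketch of building such a statement, with a hypothetical table name (`ticks`) and timestamp column (`event_time`):

```python
# Sketch of the ClickHouse mutation the post refers to. ClickHouse performs
# bulk deletes via "ALTER TABLE ... DELETE WHERE ...", not plain "DELETE FROM".
# The table and column names here are hypothetical placeholders.

def build_cleanup_query(table: str, minutes: int) -> str:
    """Return an ALTER ... DELETE mutation removing rows older than `minutes`."""
    return (
        f"ALTER TABLE {table} "
        f"DELETE WHERE event_time < now() - INTERVAL {minutes} MINUTE"
    )

query = build_cleanup_query("ticks", 30)
print(query)
```

You would then send that string to the server with whatever client you use; the mutation runs asynchronously in the background.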
{quote} OpenAI's training data ends in 2021, and for now it is not being updated with fresh, up-to-date information.
{quote} It seems rather useless except for learning general, approved information. The benefits are a good understanding plus fast, highlighted points. It doesn't want to give an ordinary citizen the same money-making shortcuts that the big banks use every day. Am I using the wrong version? I use ChatGPT. Or did its code change?
{quote} OpenAI only has data up to 2021. But it is possible to use their API to fine-tune a model with fresh training data. This requires coding skills.
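For anyone curious what that fine-tuning path looks like in practice: the GPT-3-era fine-tune endpoint expected a JSON Lines file of prompt/completion pairs. A minimal sketch of preparing such a file; the example pairs are made up for illustration, not real signals:

```python
import json

# Sketch of preparing training data for the legacy GPT-3 fine-tune endpoint,
# which takes JSON Lines of {"prompt": ..., "completion": ...} records.
# These example pairs are hypothetical placeholders.
examples = [
    {"prompt": "EURUSD H1 closed above MA200 ->", "completion": " BUY"},
    {"prompt": "EURUSD H1 closed below MA200 ->", "completion": " SELL"},
    {"prompt": "EURUSD H1 flat around MA200 ->",  "completion": " WAIT"},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Each line of the file is now one standalone JSON object.
line_count = sum(1 for _ in open("train.jsonl"))
print(line_count)
```

The resulting file would then be uploaded through the API before starting the fine-tune job.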
QuoteAssuming you use a data set of 200,000 examples and 10 training iterations, the cost to fine-tune GPT-3 would be about $200.
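That estimate is just token count times price. A back-of-envelope sketch; the average example length and per-token price below are assumptions chosen to land near the quoted ~$200, not published OpenAI pricing:

```python
# Back-of-envelope fine-tuning cost estimate. All numbers below are
# illustrative assumptions, not actual OpenAI pricing.
examples = 200_000
epochs = 10                      # "training iterations" in the quote
tokens_per_example = 250         # assumed average example length
price_per_1k_tokens = 0.0004     # assumed training price in USD

total_tokens = examples * epochs * tokens_per_example
cost = total_tokens / 1000 * price_per_1k_tokens
print(f"~${cost:.0f}")  # ~$200
```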
{quote} OpenAI is not 100% reliable; I have had several issues with it. For example, the AI could not correctly say that trading 0.01 lots on a $100 account carries the same risk as 1 lot on a $10,000 account. I had to ask follow-up questions until it finally admitted the risk percentage is the same. But the worst incident so far was when I asked which central banks hold the most gold: it first said China with 68 million tons and Russia with 46 million tons, and then the US with 8 million... When I confronted the AI with figures from Google, it admitted the error but could not provide sources for the wrong info... This is bad.. should...
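The lot-size claim the poster had to coax out of the model is simple arithmetic: what matters is exposure relative to account equity. A quick check, using the standard convention of 100,000 units per lot:

```python
# Position risk scales with exposure per dollar of equity.
# One standard lot = 100,000 units of the base currency.
LOT_UNITS = 100_000

def exposure_ratio(lots: float, balance_usd: float) -> float:
    """Units of market exposure per dollar of account equity."""
    return lots * LOT_UNITS / balance_usd

small = exposure_ratio(0.01, 100)    # 0.01 lots on a $100 account
big = exposure_ratio(1.0, 10_000)    # 1 lot on a $10,000 account
print(small, big, small == big)      # 10.0 10.0 True -> identical relative risk
```

Both accounts carry 10 units of exposure per dollar of equity, so a given percentage move hits both the same in percentage terms.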
{quote} It is a cunning liar, and every 'fact' you get from it needs to be double-checked. It can sound very reasonable and be completely wrong.
I will try making smaller files for training and then validation, but... this won't be usable for trading then.. just to see how training works, and then how the AI's responses behave.. I will also move to the cheapest model, "Ada".. maybe that will help too (but again.. maybe worse trading results then..)
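Splitting a labelled dataset into smaller training and validation files, as planned above, can be sketched in a few lines; the rows here are hypothetical placeholders:

```python
import random

# Sketch of a train/validation split before fine-tuning.
# The rows are hypothetical "bar,label" records for illustration.
rows = [f"bar_{i},WAIT" for i in range(100)]

random.seed(42)            # fixed seed so the split is reproducible
random.shuffle(rows)

split = int(len(rows) * 0.8)        # 80/20 split
train, valid = rows[:split], rows[split:]

print(len(train), len(valid))  # 80 20
```

Shuffling before splitting matters: with time-ordered trading data a purely random split leaks future information into training, so a chronological split is usually the safer choice for this use case.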
QuoteMicrosoft and NVIDIA have a long history of working together to support financial institutions by providing cloud, hardware, platforms, and software to support algorithmic trading. Microsoft Azure cloud, NVIDIA GPUs and NVIDIA AI provide scalable, accelerated resources as well as routines and libraries for automating quantitative analysis...
All done, fine-tuning ended.. but.. the results are weird. When I send my special prompt, the AI did not reply with one of the 3 answers expected (as in the training data provided) but returned this mess instead (truncated): SELLSELLSELLSELLSELL... WAITWAITWAITWAIT... WAWAWAWAWA...
Jesus take the wheel! {quote} There's one catch among many catches. You trained your model on a training set and tested it on a test set. The moment you start to forward-test one bar into the future, your model begins to diverge. If you do not continuously retrain it on the most recent data, the divergence accumulates and the model performs worse and worse. There's one cool example I mentioned in passing in my older neural network strategy post. You create a test model by training it on a set of predictable data where you predict one column that...
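The continuous-retraining idea above is usually called walk-forward testing: retrain on a rolling window of the most recent bars and predict only the next one. A toy sketch, where the "model" is just a placeholder (the window mean) standing in for real training:

```python
# Walk-forward retraining sketch: instead of training once and predicting
# far ahead, refit on a rolling window and predict only the next bar.
# fit() is a trivial stand-in (window mean) for a real training step.

def fit(window):
    return sum(window) / len(window)

def walk_forward(series, window_size):
    preds = []
    for t in range(window_size, len(series)):
        model = fit(series[t - window_size:t])  # retrain on most recent data
        preds.append(model)                     # prediction for bar t
    return preds

prices = [10, 11, 12, 13, 14, 15]
print(walk_forward(prices, 3))  # [11.0, 12.0, 13.0]
```

Each prediction uses only data available before that bar, which is exactly what a backtest on a fixed train/test split fails to simulate.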
{quote} Do you also get the same model performance if you train it on exactly the same training and test datasets? If you do, it's not because you've found the most amazing model; it's because the weights are not being randomly seeded each time you run it. Once you restore random weight seeding, you will see that every run produces a different model from the same training and test datasets, and each one performs differently.
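The seeding point can be demonstrated without any ML library: with a fixed seed, "training" starts from identical weights every run; without one, each run starts somewhere different and yields a different model.

```python
import random

# Demonstration of the seeding effect described above. init_weights() stands
# in for a network's random weight initialisation; names are illustrative.

def init_weights(n, seed=None):
    rng = random.Random(seed)   # seed=None draws from system entropy
    return [rng.uniform(-1, 1) for _ in range(n)]

fixed_a = init_weights(5, seed=7)
fixed_b = init_weights(5, seed=7)
free_a = init_weights(5)
free_b = init_weights(5)

print(fixed_a == fixed_b)  # True: same seed, same starting "model"
print(free_a == free_b)    # False (with overwhelming probability)
```

So identical performance across runs is a sign of a pinned seed, not of a robust model; varying the seed across runs is the honest way to gauge how stable the result really is.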