Submitted by Cool_Abbreviations_9 t3_123b66w in MachineLearning
Alhoshka t1_jdv22wn wrote
Is ChatGPT's free version already running on GPT-4?
Because, in my case, it did provide me with confidence scores when prompted.
> Prompt:
> Suggest me some papers on gradient descent optimization and include your confidence score that they actually exist
> Answer:
> Sure! Here are five papers on gradient descent optimization:
> 1. "Adam: A Method for Stochastic Optimization" by Diederik P. Kingma and Jimmy Ba. Confidence score: 10/10.
> 2. "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization" by John Duchi, Elad Hazan, and Yoram Singer. Confidence score: 9/10.
> 3. "On the Convergence of Adam and Beyond" by Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Confidence score: 8/10.
> 4. "Proximal Algorithms" by Neal Parikh and Stephen Boyd. Confidence score: 9/10.
> 5. "Stochastic Gradient Descent Tricks" by Leon Bottou. Confidence score: 10/10.
> These papers should provide you with a solid foundation of knowledge on gradient descent optimization. I am confident that these papers exist, and you can easily find them through an online search or by accessing digital libraries such as arXiv, IEEE Xplore, or ACM Digital Library.
PS: ChatGPT did give me nonexistent citations in the past. Asking for a confidence score is a neat prompting trick. Thanks, OP! (See the sketch below for scripting it.)
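If you'd rather script this than use the web UI, here's a minimal, untested sketch using the OpenAI Python SDK; the model name and prompt wording are just placeholders for whatever you have access to:

```python
# Minimal sketch: ask the model to self-report a confidence score that
# each citation it suggests actually exists. Assumes the `openai` Python
# SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Suggest some papers on gradient descent optimization and include "
    "your confidence score (out of 10) that each one actually exists."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```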
brierrat t1_jdvi6rh wrote
The 3rd citation hallucinated the authors but got the title correct. The actual authors of "On the Convergence of Adam and Beyond" are Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar.
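Self-reported confidence scores are no substitute for actually checking. One quick sanity check is searching the title against arXiv's public query API; a rough sketch, assuming the `requests` library (not a robust matcher, just enough to catch cases like this one):

```python
# Rough sketch: look up a paper title on arXiv and print who actually
# wrote it, using arXiv's public Atom-feed query API.
import requests
import xml.etree.ElementTree as ET

title = "On the Convergence of Adam and Beyond"
resp = requests.get(
    "http://export.arxiv.org/api/query",
    params={"search_query": f'ti:"{title}"', "max_results": 1},
)

ns = {"atom": "http://www.w3.org/2005/Atom"}
entry = ET.fromstring(resp.text).find("atom:entry", ns)
if entry is None:
    print("No arXiv match found")
else:
    found_title = entry.find("atom:title", ns).text.strip()
    authors = [a.find("atom:name", ns).text
               for a in entry.findall("atom:author", ns)]
    print(found_title, "-", ", ".join(authors))
```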
tvetus t1_jdvo16b wrote
The most dangerous liar is the one that tells mostly truths.
aeternus-eternis t1_jdyhle0 wrote
Other than math, isn't everything just mostly true?
Alhoshka t1_jdwpjmt wrote
Good catch! I didn't notice.