Submitted by FunQuarter3511 t3_11ugj0f in deeplearning
hijacked_mojo t1_jcpstsu wrote
Reply to comment by FunQuarter3511 in Question on Attention by FunQuarter3511
Yes, you have the right idea, but add this to your mental model: the queries and values are each produced by their *own* set of weights. So it's not only the keys getting modified, but the queries and values too.
In other words, the query, key, and value weights all get adjusted via backprop to minimize the error. So it's entirely possible that on a given backward pass the value weights get modified a lot (for example) while the key weights barely change.
It's all about giving the network the "freedom" to adjust itself to best minimize the error.
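A minimal sketch of what that looks like (toy dimensions and random data, not from the thread): each of Q, K, and V is computed from the same input through its *own* independent weight matrix, so each matrix receives its own gradients during backprop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 4 tokens, model width 8
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))  # token embeddings

# Three *independent* weight matrices. Because they are separate
# parameters, backprop can adjust W_v a lot while barely touching W_k.
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

# Same input X, three different projections
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Scaled dot-product attention
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
output = weights @ V

print(output.shape)  # one context vector per token: (4, 8)
```

During training, the gradient of the loss flows back through `output` into all three of `W_q`, `W_k`, and `W_v` separately, which is exactly the "freedom" described above.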
FunQuarter3511 OP t1_jcptitk wrote
That makes a ton of sense. Thanks for your help! You are a legend!