
blablanonymous t1_j1msyd5 wrote

  1. Can’t you also create custom loss functions for XGBoost? I’ve never used it myself, but it seems about as easy as doing it for an ANN (see the first sketch after this list).

  2. Is it always trivial to get meaningful embeddings? Does taking the last hidden layer of an ANN guarantee that the representation will be useful in many different contexts? I think it might need more work than you expect. I’m actually looking for a write-up on what conditions need to be met for a hidden layer to provide meaningful embeddings. Intuitively, training with a triplet loss favors that (second sketch below), but I’m not sure it holds in general.

  3. XGBoost allows for this too, doesn’t it? The scikit-learn API at the very least lets you create MultiOutput models very easily (third sketch below). Granted, it can be silly to have multiple models under the hood, but whatever works.
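
Re 1: with XGBoost’s native API, a custom objective is just a callable that returns the per-sample gradient and hessian. A minimal sketch (the asymmetric loss and the toy data here are made up for illustration):

```python
import numpy as np
import xgboost as xgb

def asymmetric_mse(preds, dtrain):
    """Asymmetric squared error: under-prediction costs 2x more.
    XGBoost's native API expects (gradient, hessian) per sample."""
    residual = preds - dtrain.get_label()
    weight = np.where(residual < 0, 2.0, 1.0)  # heavier penalty when predicting low
    return 2.0 * weight * residual, 2.0 * weight

# Toy data just to make the snippet runnable
X, y = np.random.rand(200, 5), np.random.rand(200)
booster = xgb.train({"max_depth": 3}, xgb.DMatrix(X, label=y),
                    num_boost_round=50, obj=asymmetric_mse)
```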
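
Re 2: by triplet loss I mean training an encoder so anchors land closer to positives than to negatives in embedding space. A rough sketch using PyTorch’s built-in `nn.TripletMarginLoss` (the encoder architecture, sizes, and random batch are placeholders):

```python
import torch
import torch.nn as nn

# Tiny encoder; 128-d inputs and 32-d embeddings are arbitrary choices
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# anchor/positive should be similar items, negative a dissimilar one;
# random tensors stand in for a real batch here
anchor, pos, neg = (torch.randn(16, 128) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(pos), encoder(neg))
opt.zero_grad()
loss.backward()
opt.step()
```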
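
Re 3: the scikit-learn wrapper I had in mind is `MultiOutputRegressor`, which fits one clone of the estimator per target column behind a single interface (toy data again):

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

# Toy data: 100 samples, 5 features, 3 targets
X, Y = np.random.rand(100, 5), np.random.rand(100, 3)

# One XGBoost model is fit per target column under the hood
model = MultiOutputRegressor(XGBRegressor(n_estimators=50)).fit(X, Y)
print(model.predict(X).shape)  # (100, 3)
```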

Sorry, I’m playing devil’s advocate here, but the vibe I’m getting from your post is that you’re excited to finally get to play with DNNs. Which I can relate to. But don’t get lost in that intellectual excitement: at the end of the day, people want you to solve a business problem. The faster you can get to a good solution, the better.

In the end it’s all about trade-offs. The people who employ you just want the best value for their money.
