sckuzzle t1_j55dfsu wrote

Reply to comment by asuds in Darwin’s closing by jimpaulmitsi

> And this is all because he didn’t want to negotiate with unionized workers?

No. They negotiated. They weren't able to come to a compromise, and Darwin's went out of business instead.

This means that the union was asking for enough that the store literally went out of business. Not that they weren't willing to negotiate.

18

sckuzzle t1_ixv2r4c wrote

It probably accurately describes the higher end of the street value, yea. But representing drugs by their street value is like representing a crate of apples as having a value of $1000, because they could be turned into apple pies and sold at Marie Callender's.

These guys had the apples and they had pie crust and they had ovens, so they were definitely planning on turning the apples into apple pies. But there's a lot of work that has to be done between having the raw materials and actually having that much in cash.

7

sckuzzle t1_ixv0dam wrote

> They also didn't adhere to strict lab conditions to assure that each pill contained the same dose.

Oh, absolutely. Fentanyl is dangerous and we should harshly punish those who intentionally peddle it. But we don't have to misrepresent how bad this is by saying there was enough fent there to kill everyone in the state three times over when there wasn't.

Justice comes with accurately representing the facts.

25

sckuzzle t1_ixuyc32 wrote

> Are you suggesting they track down each granule? Nobody does pure fentanyl. It’s all cut.

Of course they shouldn't track down the grains. I'm saying that at minimum they should have said that they found 100 lbs of powder that contained fentanyl. Saying it's 100 lbs of fentanyl is simply not factually accurate and intentionally misrepresents what was seized.

>and the fact that you’re trying to minimize it is pretty disgusting.

Mmmm, yes, that's what I'm doing. Calling out lies means that I must be pro-murder, right? Cool. Got it.

11

sckuzzle t1_ixuxt36 wrote

> the entirety of any mixture containing a drug is counted as that drug

for sentencing purposes. Laws can't change the physical reality of the world. And the physical reality is that there weren't 100 lbs of fentanyl there, no matter what some law says about how long someone should be thrown in jail for. And the police are absolutely still lying (not misleading) about how much fent there is. It's both not the "legal definition" and not the truth.

>And really, less pure = more profit

lol.

−6

sckuzzle t1_iw9fe1i wrote

Writing "short" code isn't always a good thing. Yes, your suggestion has fewer lines, but:

  • It takes ~6 times as long to run

  • It does not return the correct output (np.split does not take every nth value; it divides the array into n consecutive groups)

I'm absolutely not claiming my code was optimized, but it clearly showed the steps required to calculate the necessary output, so it was easy to understand. With "short" code it is much harder to see what is happening, which often leads to bugs (as seen here). And depending on how it's done, it can even take longer to run (the way it was implemented here requires extra steps which aren't necessary).
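The split pitfall is easy to see with a tiny example (a sketch, assuming numpy; the array and variable names here are just for illustration):

```python
import numpy as np

data = np.arange(12)  # [0, 1, ..., 11]

# np.split divides the array into n consecutive groups:
groups = np.split(data, 4)
first_group = list(groups[0])  # [0, 1, 2] - a chunk, not every 4th value

# Taking every 4th value needs a stride instead:
every_fourth = list(data[3::4])  # [3, 7, 11]
```

So `np.split(data, 4)` and `data[3::4]` answer two different questions, which is exactly the bug being described.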

1

sckuzzle t1_iw8jv72 wrote

Why are you using a "model" / MLPs at all for this? This is a strictly data processing problem with no model creation required.

Just process your data by throwing away 75% of it, then take the max, then check if each value is equal to the maximum.

Something like (python):

import numpy as np

def process_data(input_array):
  # Keep every 4th value (indices 3, 7, 11, ...).
  every_fourth = np.asarray(input_array)[3::4]
  # Flag which of those values equal the maximum.
  matching_values = (every_fourth == every_fourth.max())
  return matching_values

8

sckuzzle t1_ivbjs9d wrote

If I were to approach this, I'd train them at the same time. You have two models - one for each side - each with their own reward functions. Then you'd train them in parallel, playing against each other as they go.

It's a bit of a challenge because each one can only be trained relative to the strength of the other - so you need them both to get "smarter" in order to continue their training. But that's no different from a model that trains against itself.
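A minimal sketch of that parallel setup (hypothetical toy code - `ToyPlayer`, `play_game`, and the skill-based win probability are stand-ins for a real environment and RL update rule):

```python
import math
import random

class ToyPlayer:
    """Stand-in learner: positive reward nudges 'skill' up.
    A real setup would apply an actual RL policy update."""
    def __init__(self):
        self.skill = 0.0

    def update(self, reward):
        self.skill += 0.01 * reward

def play_game(a, b):
    """Return +1 if a wins, -1 if b wins; higher skill wins more often."""
    p_a_wins = 1.0 / (1.0 + math.exp(b.skill - a.skill))
    return 1 if random.random() < p_a_wins else -1

def train_self_play(episodes):
    """Train both sides in parallel: each episode, the winner's
    reward is the loser's penalty, so each model's progress is
    only meaningful relative to the other's current strength."""
    a, b = ToyPlayer(), ToyPlayer()
    for _ in range(episodes):
        result = play_game(a, b)
        a.update(reward=result)    # side A maximizes the result
        b.update(reward=-result)   # side B maximizes its negation
    return a, b

a, b = train_self_play(200)
```

The zero-sum reward (one side's gain is the other's loss) is what makes the two reward functions push the models to improve against each other rather than against a fixed benchmark.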

2

sckuzzle t1_ivbiskk wrote

> so it must understand both strategies about equally regardless of which side it's playing

What do you mean here by "understand"? My understanding is that the state-of-the-art AI has no concept of what the capabilities of its opponent are or even what its opponent might be thinking; it only understands how to react in order to maximize a score.

So while you could train it to react well no matter which side it is playing, how would it benefit from being able to play the other side better? It would need to spin up a duplicate of itself to play the other side and then analyze itself to understand what is happening, but then it would just get into an infinite loop as its duplicate spins up its own duplicate.

I guess what I'm getting at is that these AI algorithms have no theory of mind. They are simple stimulus-response models. Even the concept of an opposing player is beyond them - it'd be the same whether it was playing solitaire or chess.

1