elehman839
elehman839 t1_jeabdzf wrote
It is also happening with trees! Look.
I'm trying to make a serious point.... Probably the article has some validity, but the image blocks inside the article add a lot of weight to the argument. And generating blocks of similar-looking images is pretty easy with search-by-image. This instantly creates a "world is all the same" effect.
For example, look at the block of apartment building images. Every single photo has a white sky (have skies really become more uniform lately?), and in six of them the pavement looks like it just rained (has more of the world really entered a just-stopped-raining phase?). This kinda looks like an assembly of photographs produced with search-by-image.
Sooo... that seems a little unfortunate to me.
elehman839 t1_jdt94ba wrote
Reply to comment by ArcticWinterZzZ in Why is maths so hard for LLMs? by RadioFreeAmerika
Here's a neat illustration of this. Ask ChatGPT to multiply any two four-digit numbers. For example:
Input: 3742 * 7573
Output: The product of 3742 and 7573 is **283**5068**6**
The correct answer is 28338166. The bolded digits are right, and the plain digits are wrong. So it gets the first bit right, the last bit right, and the middle bit wrong. This seems to be very consistent.
Why is this? In general, computing the first digits and the last digits requires less computation than the middle digits. For example:
- Determining that the last digit should be a 6 is easy: notice that the last digits of the multiplied numbers are 2 and 3, and 2 * 3 = 6.
- Similarly, it is easy to see that 3000-something times 7000-something should start with a 2, because 3 * 7 = 21, so the product is twenty-something million.
- But figuring out that the middle digits of the answer are 38 is far harder, because every digit of one number has to be combined with every digit of the other, and the resulting partial products have to be summed with carries.
So I think what we're seeing here is ChatGPT hitting a "compute per emitted token" limit. It has enough compute to get the leading digits and the trailing digits, but not the middle digits. Again, this seems to be quite reliable.
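To make that counting argument concrete, here's a minimal Python sketch. It's just schoolbook multiplication (no claim about ChatGPT's internals): it counts how many digit-by-digit products feed into each column of the answer, before carries.

```python
# Count the digit pairs whose product lands in each output column
# (column 0 = least significant) when multiplying two numbers longhand.
def digit_pair_counts(a: str, b: str) -> list[int]:
    counts = [0] * (len(a) + len(b))
    for i in range(len(a)):      # digit position in a, from the right
        for j in range(len(b)):  # digit position in b, from the right
            counts[i + j] += 1
    return counts

print(digit_pair_counts("3742", "7573"))  # [1, 2, 3, 4, 3, 2, 1, 0]
print(3742 * 7573)                        # 28338166
```

The last column depends on a single digit pair and the leading column mostly on carries, but each middle column has to sum several partial products plus incoming carries, which is exactly where the model seems to give up.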
elehman839 t1_jdpxkm9 wrote
Reply to comment by czl in Nvidia Speeds Key Chipmaking Computation by 40x by Vucea
Sounds like the computation may sometimes need to be done multiple times per design:
> Even a change to the thickness of a material can lead to the need for a new set of photomasks
Moreover, it sounds like you can also get better chips, not just the same chip sooner. Prior to this speedup, inverse lithography could practically be used on only certain parts of the design:
> it’s such a slog that it’s often reserved for use on only a few critical layers of leading-edge chips or just particularly thorny bits of them
Furthermore, you can get an increased yield of functional parts, which should lower manufacturing cost:
> That depth of focus should lead to less variation across the wafer and therefore a greater yield of working chips per wafer
elehman839 t1_jdollr0 wrote
Reply to comment by The_One_Who_Slays in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
If anyone cares: I found Appendix B, but there wasn't much more helpful information. In particular, I don't understand how the randomly-generated images in their evaluation process were produced. And, as far as I can tell, the significance of the paper comes down to that detail.
- If the randomly-generated images were systematically defective in any way, then the 80% result is meaningless.
- On the other hand, if these randomly-generated images are fairly close to the image shown to the person in the fMRI-- but just differing in some subtle ways-- then 80% would be absolutely amazing.
Sooo... I think there's something moderately cool here, but I don't see a way to conclude more (or less) than that from their paper. Frustrating. :-/
elehman839 t1_jdmt4om wrote
Reply to comment by The_One_Who_Slays in A recently submitted paper has demonstrated that Stable Diffusion can accurately reconstruct images from fMRI scans, effectively allowing it to "read people's minds". by iboughtarock
The claims are interesting, but far more modest than people here seem to realize. This is what they say about their evaluation process:
> we conducted two-way identification experiments: examined whether the image reconstructed from fMRI was more similar to the corresponding original image than randomly picked reconstructed image. See Appendix B for details and additional results.
So, if I understand correctly, the claim is this: given the image their system generated from an fMRI scan and a randomly picked comparison image, the generated one matches what the subject actually saw more closely only 80% of the time.
This is statistically significant (random guessing would give only 50%), but the practical significance seems pretty low. In particular, that's waaaay far from a pixel-perfect image of what you're dreaming. The paper has only cherry-picked examples. The full evaluation results are apparently in Appendix B, which I cannot locate. (I'm wondering whether the randomly-generated images had some telling defect, for example.) Also, the paper seems measured, but this institution seems to seek press coverage very aggressively.
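For what it's worth, here's one way that two-way test could work, sketched in Python. Everything in it is a guess on my part: the similarity measure (plain correlation of flattened images) and how the comparison image gets picked are exactly the details I can't verify from the paper, so treat it as illustrative only.

```python
import numpy as np

# For each trial, check whether the reconstruction from a scan is more
# similar to its own original image than some other randomly picked
# reconstruction is. 0.5 is chance; the paper reports roughly 0.8.
def two_way_identification(originals, reconstructions, seed=0):
    rng = np.random.default_rng(seed)
    n = len(originals)
    wins = 0
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])  # random distractor
        sim_own = np.corrcoef(originals[i].ravel(), reconstructions[i].ravel())[0, 1]
        sim_other = np.corrcoef(originals[i].ravel(), reconstructions[j].ravel())[0, 1]
        wins += sim_own > sim_other
    return wins / n
```

Whether 80% on a test like this is impressive hinges entirely on how hard the distractor images are, which is the point above.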
elehman839 t1_jd8gnav wrote
Reply to Persuasive piece by Robert Wright. Worrying about the rapid advancement of AI no longer makes you a kook. by OpenlyFallible
To me, "AI turns evil" scenarios seem a ways out. The more-probable scenario in the near term that concerns me is nasty PEOPLE repurposing AI to nasty ends. There are vile people who make malware and ransomware. There are people who've lived wretched lives, are angry about that, and just want to inflict damage wherever they can. These folks may make up 0.001% of the population, but that's still a lot of people worldwide.
So how are these folks going to use AI to cause as much damage as possible? If they had control of an AI, they could give it the darkest possible intentions. Maybe something like, "befriend people online, then over a period of months gradually undermine their sense of self-worth and encourage them to commit suicide". Or "relentlessly make calm, rational-sounding arguments in many online forums under many identities that <some population group> is evil and should be killed".
As long as AI is super compute-intensive, there will be a check on this behavior: if you're running on BigCorp's cloud service, they can terminate your service. But when decent AI can run on personally-owned hardware, I think we're almost certain to see horrific stuff like this. It may not end the world, but it will be quite unpleasant.
elehman839 t1_jceao49 wrote
Reply to comment by AverageCowboyCentaur in Microsoft Cuts Team Focused on AI Ethics, Report Says by PmButtPics4ADrawing
Microsoft cut a particular "Ethics and Society" team, but the original article notes:
> Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.
While surely a big deal for people on the eliminated team, this sounds like a minor reshuffle for Microsoft.
In general, figuring out what an effective "responsible AI" team actually does on a day-to-day basis has been a puzzle industry-wide. My impression is that there's been a fair amount of experimentation, and this action might be shutting down a particular approach that is now perceived by Microsoft leadership as less effective than alternatives.
Or else GPT4 hacked the HR system and liquidated some potential foes.
elehman839 t1_jcdjbmg wrote
Reply to comment by wywywywy in [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Researchers and engineers seem to be moving from one organization to another pretty rapidly right now. Hopefully, that undermines efforts to keep technology proprietary.
elehman839 t1_jcbb9cg wrote
Two comments:
- GPT-4 shows the pace of progress in ML/AI. You couldn't conclude much about the trajectory of progress based on ChatGPT alone, but you can draw a (very steep) line between two datapoints and reach the obvious conclusion, i.e. holy s**t.
- Science fiction is a mess. Real technology looks set to overtake fantasies about the far future. What can you even imagine about thinking machines that now seems clearly out of reach?
elehman839 t1_jcb9vwd wrote
Reply to comment by Rohit901 in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
I suspect Microsoft faces a conundrum:
- They want to use GPT models to convince more people to use Bing in hopes of getting billions in ad revenue.
- But operating GPT models is insanely compute-intensive. I bet every GPU they can find is already running cook-eggs hot, and they are asking employees to rummage around in toyboxes at home for old solar-powered calculators to get a few more FLOPs.
- Given this compute situation, as more people use Bing, they will have to increasingly dumb down the underlying GPT model.
elehman839 t1_j8nlzms wrote
Reply to comment by BigMax in US’s first solar panels over canals pilot will deploy iron flow batteries by For_All_Humanity
Just for interest, the power storage systems they are planning to use are described here. The copyright on the datasheet is 2023, so I gather this is pretty new.
Each system comes in a semi-truck trailer-- pretty big. The rated capacity is 400 kWh. To put that in perspective, the capacity of a Tesla Model Y is 75 kWh. So a semi-trailer sized battery can charge only 5-6 electric cars from 0 to full.
This makes sense, I suppose. The battery must make up a large part of an electric car, and such batteries probably have higher energy density than these storage systems. Still, I guess it is good to know that a semi-sized battery is not going to power a small city or something, but rather maybe charge all the electric cars on one block.
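In back-of-the-envelope form (the per-home figure is a rough assumption on my part, not something from the datasheet):

```python
trailer_kwh = 400        # rated capacity of one trailer-sized system
model_y_kwh = 75         # Tesla Model Y battery, roughly
home_kwh_per_day = 30    # rough guess for a typical US home's daily use

print(trailer_kwh / model_y_kwh)       # ~5.3 full car charges
print(trailer_kwh / home_kwh_per_day)  # ~13 home-days of electricity
```

Which squares with the "one block of electric cars, not a small city" intuition above.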
elehman839 t1_j6nz6j5 wrote
Reply to comment by khamelean in Why AI can not replace search index by shanoshamanizum
Yeah, the criteria that Google uses to evaluate its search results are documented publicly and in exhausting detail right here:
If you have a week to spare, you can read through and decide which principles you agree with and which you don't like. And you can call them either "biases" or "principles" depending on your mood.
Either way, I do not think there is yet anything remotely this comprehensive (and public) guiding the behavior of language models.
elehman839 t1_j6epsdz wrote
Reply to comment by bogglingsnog in Google’s MusicLM is Astoundingly Good at Making AI-Generated Music, But They’re Not Releasing it Due to Copyright Concerns by Royal-Recognition493
Reminds me of this awesome old video: https://youtu.be/5pidokakU4I?t=50
elehman839 t1_j60e9xn wrote
Reply to Are most of our predictions wrong? by Sasuke_1738
For people not watching the AI/ML field closely (which is most, I suppose), I can imagine it felt like ChatGPT just "popped up". One day, AI was science fiction, and the next day it was (with qualifications) here.
But for people working in this field or watching closely, ChatGPT is just one of many, many data points on a performance curve that has been rising rapidly and consistently for about five years. Leaderboards on test suites (like this one) were skyrocketing month-by-month. So predicting that something like ChatGPT was coming was as simple as, "I bet that line that keeps going up is gonna keep going up."
One implication is that ChatGPT is almost surely not the end of the line; rather, we can be near-certain that ChatGPT will be "old school" by this fall.
elehman839 t1_jebley2 wrote
Reply to comment by samwell_4548 in Thought experiment: we're only [x] # of hardware improvements away from "AGI" by yeah_i_am_new_here
Yes, and I think this reflects an interesting "environmental" difference experienced by humans and AIs.
Complex living creatures (like humans) exist for a long time in a changing world, and so they need to continuously learn and adapt to change. Now, to some extent, we do follow the model of "spend N years getting trained and then M years reaping the benefit", but that's only a subtle shift in emphasis, not the black-and-white split between training and inference that ML has.
In contrast, AI was developed largely for short-term, high-volume applications. In that setting, it makes sense to spend a lot of upfront time on training, because you're going to effectively clone the thing and run it a billion times, amortizing the training cost. And giving it continuous learning ability isn't that useful, because each application lasts only minutes, seconds, or even milliseconds.
Making persistent AI that continuously learns and remembers seems like a cool problem! I'm sure this will require some new ideas, but with the number of smart people now engaged in the area, I bet those will come quickly-- if there's sufficient market demand. And I can believe that there might be...