londons_explorer t1_j6h7kmh wrote

Doesn't look awfully hard to repair... Just replace the ties, which are standard parts you can bring in in bulk by train.

Replacing just 1 in 10, say, would let you run light vehicles on the railway, which then allows you to do the rest of the repair in parallel.

−10

londons_explorer t1_j6al3tb wrote

>They were not able to find significant improvements with scaling anymore.

GPT-3 has a window size of 2048 tokens; ChatGPT has a window size of 8192 tokens. The compute cost is superlinear in window size, so I suspect the compute required for ChatGPT is a minimum of 10x what GPT-3 used. And GPT-3 cost ~12M USD (at market rates - I assume they got a deep discount).

So I suspect they did scale compute as much as they could afford.
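A rough sketch of the "superlinear" point above, assuming the dominant term is vanilla self-attention, whose cost scales as O(n²) in sequence length n (this ignores the O(n) feed-forward terms, so it is an upper-bound intuition, not an exact cost model):

```python
# Rough sketch: why compute grows superlinearly with context window.
# Assumes vanilla self-attention, where cost scales as O(n^2) in the
# sequence length n; the O(n) MLP terms are ignored for simplicity.

def attention_cost_ratio(n_new: int, n_old: int) -> float:
    """Ratio of attention FLOPs when the window grows from n_old to n_new."""
    return (n_new ** 2) / (n_old ** 2)

ratio = attention_cost_ratio(8192, 2048)
print(ratio)  # 16.0 - a 4x longer window costs 16x in attention alone
```

So even before any extra training data or parameters, the window growth alone pushes the attention cost up by an order of magnitude.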

4

londons_explorer t1_j4xknrt wrote

If you want to make the assumption that most buildings don't have any curves in their roofs...

Then take your point cloud, extract the largest polygons... There are classical algorithms for such things.

From the polygons, turning that into a plan should be quite straightforward.

While ML could be applied... I think you'll get better results quicker with classical methods.
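A minimal sketch of one such classical method: RANSAC plane extraction, which repeatedly fits a plane to three random points and keeps the plane with the most inliers. This is a hand-rolled illustration; a real pipeline would use a library such as Open3D or PCL, and would peel off planes one at a time to get the full roof polygon set.

```python
# Illustrative RANSAC plane extraction from a point cloud.
# Hand-rolled for clarity; real pipelines would use Open3D/PCL.
import random

def fit_plane(p1, p2, p3):
    """Plane (a, b, c, d) with ax + by + cz + d = 0 through three points."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a, b, c = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d

def ransac_plane(points, threshold=0.05, iters=200):
    """Return the plane with the most inliers, plus the inlier indices."""
    best_plane, best_inliers = None, []
    for _ in range(iters):
        a, b, c, d = fit_plane(*random.sample(points, 3))
        norm = (a * a + b * b + c * c) ** 0.5
        if norm == 0:  # degenerate (collinear) sample
            continue
        inliers = [i for i, (x, y, z) in enumerate(points)
                   if abs(a * x + b * y + c * z + d) / norm < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (a, b, c, d), inliers
    return best_plane, best_inliers
```

To extract several roof planes, you would run this, remove the inliers, and repeat until the remaining cloud is too small; the inlier sets then get projected to 2D and turned into polygons.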

3

londons_explorer t1_j3z5oe6 wrote

> Industry convenience should not trump public health and yet it occurs on a regular basis.

The regulatory body for making this decision is the FAA. The FAA banning leaded fuels is like NASA banning rockets or the Treasury banning dollars.

The real question is why we, the people, have set the system up so that the regulatory body making this decision doesn't have our best interests in mind.

8

londons_explorer t1_j3qd9yb wrote

> too early to consider diffusion as a serious alternative to autoregression for generative language modelling at scale

This blog post explores lots of ideas and has conjectures about why they may or may not work...

But it seems this stuff could just be tried.... Burn up some TPU credits and simply run each of the types of model you talk about and see which does best.

Hard numbers are better than conjecture. Then focus future efforts on improving the best numbers.

2

londons_explorer t1_j28kvqp wrote

There is no latency constraint - it's a pure streaming operation, and the total data to be transferred is 1 gigabyte for the whole set of vectors, which is well within the read performance of Apple's SSDs.

This is also the naive approach - there are probably smarter ones, e.g. doing an approximate search with very low resolution vectors (say 3-bit depth), and then a second pass over the high resolution vectors of only the most promising few thousand results.
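A toy sketch of that two-pass idea: scan coarsely quantized (3-bit) vectors first, then rerank only a shortlist at full precision. All names and parameters here are illustrative, not from any particular library:

```python
# Two-pass approximate nearest-neighbor search (illustrative sketch):
# coarse pass over 3-bit quantized vectors, exact rerank of a shortlist.

def quantize(vec, bits=3, lo=-1.0, hi=1.0):
    """Map each component into one of 2**bits integer buckets."""
    levels = (1 << bits) - 1
    return tuple(
        round((min(max(x, lo), hi) - lo) / (hi - lo) * levels) for x in vec
    )

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_pass_search(query, vectors, shortlist=1000):
    """Approximate pass on quantized vectors, exact rerank of the shortlist."""
    q_query = quantize(query)
    coarse = sorted(range(len(vectors)),
                    key=lambda i: sq_dist(q_query, quantize(vectors[i])))
    candidates = coarse[:shortlist]
    return min(candidates, key=lambda i: sq_dist(query, vectors[i]))
```

In practice you would precompute and pack the 3-bit codes rather than quantize on the fly, since the whole point is to cut the bytes read in the first pass.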

3

londons_explorer t1_j23cp3x wrote

I think 60 minutes is crap.

I want deliveries within 5 minutes.

The products can already be boxed and attached to a drone. And then when someone orders that product, the drone takes off and flies at 100 mph to your address.

100 mph for 5 minutes is ~8 miles, which covers a 16-mile-diameter circle, a few of which would fully cover even large cities.

And there are plenty of drones which can fly 100 mph - world record small drones go nearly 200 mph.
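The coverage arithmetic above, written out (radius = speed × time; the exact radius is 8⅓ miles, rounded to 8 in the comment):

```python
# Coverage arithmetic for the 5-minute drone delivery claim above.
import math

SPEED_MPH = 100
FLIGHT_MIN = 5

radius_miles = SPEED_MPH * FLIGHT_MIN / 60   # ~8.3 miles
diameter_miles = 2 * radius_miles            # ~16.7 miles
area_sq_miles = math.pi * radius_miles ** 2  # ~218 sq miles per launch site
print(round(radius_miles, 1), round(diameter_miles, 1), round(area_sq_miles))
```

At ~218 square miles per launch site, a handful of sites would indeed blanket most large cities.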

1