breckenridgeback t1_j6661ml wrote

> Uphill. Water soaks into the ground and flows within it, on top of the bedrock which is impermeable. When the terrain allows it can emerge from the surface lower down as a spring.

This is one means of creating a spring, but not the only one.

Another means is an artesian spring, where water pressure inside the rock can force water upward a short distance. The ultimate source of energy is still gravity (the source of the pressure is water higher up trying to force its way down), but the water doesn't flow exclusively downhill.

5

breckenridgeback t1_j64uv1t wrote

It varies - TVTropes has a long list of examples. One common convention when media is made in language A and uses language B, then gets translated into language B, is to translate language A to language B and language B to some other language. For example, the TVTropes page lists an anime example where an English teacher in the original Japanese becomes a Spanish teacher in the English dub, which is roughly equivalent in terms of the role they'd play in a character's life (they teach a language most people are familiar with but might not actually speak fluently).

Sometimes it's even inconsistent within a work. They list a French dub of Pearl Harbor that translates English to French, but leaves Japanese as Japanese (which would align with the experience as created for English-speaking audiences, who would have understood the English bits but not the Japanese bits).

See also their pages on cultural translation or Woolseyism, where a work isn't even translated directly, but is instead translated to keep the same feelings, sense, or audience reaction.

2

breckenridgeback t1_j64l9qz wrote

> Geography would have a very negligible effect at that altitude. It's a fixed altitude above sea level and doesn't fluctuate

But that's just because it's an arbitrary human definition. There are dynamics in the upper atmosphere that matter for some purposes, and the atmosphere in the sense of "has dynamics that matter sometimes" extends far into what we think of as "space".

11

breckenridgeback t1_j5zbkuo wrote

It's actually pretty rare for a startup to be turning enough of a profit (or indeed any profit at all, though that has been changing recently) for investors to make money directly off of dividends. Their money usually comes from selling their share of the company during an acquisition or IPO.

1

breckenridgeback t1_j5z8pux wrote

An acquisition usually means buying the existing shares. Since initial investors got their shares at a very low valuation (roughly, "the stock price was low", though the company isn't publicly traded at that stage), a high-value acquisition buys those shares for much more than the investor paid for them. That means the investor gains money.

The same goes for an IPO. In that case, the stock becomes publicly traded, usually at an initial price far above the valuation per share at the time the investor invested, and the investor can sell their stock for much more than they bought it for.

In broader terms: the investor owns part of the company as part of their investment. If the company becomes worth more, their share also becomes worth more.
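As a toy example, the arithmetic looks like this (every number here is made up purely for illustration, and real deals involve dilution, fees, and share classes that this ignores):

```python
# Hypothetical: a seed investor buys 20% of a startup when the whole
# company is valued at $5 million.
seed_valuation = 5_000_000
stake = 0.20
cost = stake * seed_valuation          # $1,000,000 paid for the shares

# Years later, the company is acquired for $100 million. The investor's
# 20% of the sale price:
acquisition_price = 100_000_000
proceeds = stake * acquisition_price   # $20,000,000

print(f"paid ${cost:,.0f}, received ${proceeds:,.0f}, "
      f"a {proceeds / cost:.0f}x return")
```

The investor never received a dividend; the entire return came from the company's valuation rising between the two transactions.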

7

breckenridgeback t1_j5ktnyp wrote

> I know this, im just wondering if d is a part of the valence shell of that energy level.

The 3d shell is on the same level as the 3s and 3p orbitals in a hydrogen atom. But in the larger atoms that actually fill the 3d shell in their ground state, the 3d shell ends up at a much higher energy than the 3s and 3p shells because of how it interacts with the other electrons. Its energy lands between the 4s and 4p orbitals instead, so for the purposes of the periodic table, you can think of 3d as hanging out in the fourth "shell".

3

breckenridgeback t1_j5kt2u0 wrote

The notion of "shells" is a simplification.

You are familiar, I imagine, with the idea of the s-, p-, d-, and f-orbitals, since you mention them in your post. These correspond to ℓ = 0, 1, 2, and 3 respectively in the state of an electron. And they come in different levels, given by the value of n, also part of the description of an electron. So for example, the 2s subshell corresponds to n = 2 and ℓ = 0.

As a broad rule, these subshells are filled in the following order:

  • Subshells with lower n + ℓ are filled first.
  • For subshells with the same n + ℓ, start with the lowest n.

That results in the order 1s, 2s, 2p, 3s, 3p for the first three rows (with n+ℓ values of 1, 2, 3, 3, 4, 4 respectively). But once you get to the next row, the first that contains d electrons, it goes 4s, 3d, 4p. It doesn't "skip" d, it's just that the d it's filling in the fourth row of the periodic table is the 3d subshell, not the 4d one. The noble gas in that row (krypton, as it happens) does in fact have its highest occupied d shell filled. It has the electron configuration [Ar] 4s^(2) 3d^(10) 4p^(6) - this 3d subshell is full.
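The two filling rules above can be sketched in a few lines of Python. This is a toy illustration of the rule exactly as stated, not a full treatment of electron configurations (the rule has known exceptions, such as chromium and copper):

```python
L_LETTERS = "spdf"

# (n, l) pairs for the first five principal shells; l runs from 0 (s) upward.
subshells = [(n, l) for n in range(1, 6) for l in range(min(n, 4))]

# The two rules: lower n + l fills first; ties are broken by lower n.
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

order = [f"{n}{L_LETTERS[l]}" for n, l in subshells]
print(" ".join(order))
```

The output starts 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p, matching the real filling order through 5p; past that point the list is incomplete only because we stopped at n = 5 (6s would come before 4f).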

The reason the 3d shell shows up between 4s and 4p here, even though n is nominally an energy level, is that these numbers describe the energies of an orbital in the absence of other electrons. But the other electrons in the atom jostle the energy levels quite a bit. It turns out that when lower orbitals are occupied, d orbitals end up at such high energy that they effectively get "bumped up a tier" of the table.

It doesn't have any 4d electrons yet, because 4d electrons would be much higher energy than the 4s, 3d, and 4p electrons it actually has.


The better way to think about this is in terms of the gaps in energy levels. Noble gases have a large gap between the energy level of the highest-energy electron they have and the next available electron slot. That makes sticking an electron to them hard, because that electron has to occupy a high-energy state. And it makes stripping an electron off of them hard too, because all the electrons they have occupy low energy states. Their configurations look like this (where the blue lines represent energy of occupied orbitals, and red represents unoccupied orbitals).

As you go down the periodic table, the notion of "shells" starts to become less useful, because the gaps between the shells shrink enough that the gaps within the shells can cause them to spill over one another. So chemistry near the bottom of the periodic table becomes more complicated, and in fact it's generally believed that element 118 - which is one of the noble gases by its position on the periodic table - would actually be a solid if it were stable enough to stick around and have any chemistry at all.

3

breckenridgeback t1_j29ui63 wrote

At the level you're talking about, the idea of "empty space" barely even makes sense.

You probably think about particles as being little solid billiard balls that bounce off of each other. That model can work OK for thinking about some things, but once you get down to the scale of atoms, it breaks down fast.

The state of an electron in an atom isn't a ball orbiting a nucleus. Instead, an electron is a smeared-out "cloud" with a particular sort of shape. You can think of this as the electron being a billiard ball that has a certain probability of being at any given point, but thinking of the cloud as the basic truth and the billiard ball as an approximation is closer to reality. Insofar as the billiard ball model works, electrons (and all other elementary particles) have zero size, but clearly they are taking up space in some sense, so we need to set aside that model if we want to talk about ideas of taking up space.

Instead, when we think of space being occupied, we mean something like "if you try to put something else there, it'll push back". This is why you don't fall through your wall. And it's at the heart of your question.

The reason that your atoms do not fall through the wall's atoms is twofold:

  • First, the electrons in your atoms and the electrons in the wall's atoms are both negatively charged, and they get close to each other before they get close to the positively charged protons in the other atoms. That creates an electric repulsion that stops the two from getting too close, or from your atoms passing through the wall's.

  • Second, the electron clouds take up space, in the sense that compressing a cloud takes energy. When you get very close to the wall, the electron clouds in your atoms start to press against the electron clouds in the wall's atoms. The clouds resist compression, which can hold you up in much the same way that a spring can hold up a weight.

Both of these effects contribute a fair amount to the "solidness" of solid objects.

3

breckenridgeback t1_j29tf9m wrote

Reply to comment by phiwong in eli5 Atoms being mostly empty by NTOK21

In fact, this is only part of the story. Degeneracy pressure contributes pretty meaningfully to the "solidness" of solids, so insofar as the idea of "empty space" even makes sense, atoms are not "mostly empty".

2

breckenridgeback t1_j20o9uq wrote

The AU and parsec, the two most commonly used astronomical distance measures, exist because they are built on the most easily measured things in the Universe: the parameters of Earth's orbit itself.

The AU is the distance from the Earth to the Sun. Geometry can easily tell us that, say, Venus is about two-thirds as far from the Sun as we are, or that Mars is about half again as far, but it can't actually tell us the Earth-Sun distance directly without some other pieces of information. Thus, a lot of the details of the solar system got worked out using the Earth-Sun distance (that is, 1 AU) as a baseline; as our estimates of the AU got better, so too did our estimates of other things.

Similarly, the parsec also depends on the Earth's orbit. In that sense, the AU and the parsec are closely related. Specifically, the parsec is the distance at which an object's position in the sky changes by 1 second of arc (1 / 3600 of a degree) over a distance of 1 AU. Mathematically, that means it's 1 AU times cotangent(1/3600 degree) = 1 AU times ~206,264, which works out to a little over 3 light-years. We use the parsec because measuring these angles is how we first established how far away the stars are, which let us develop systems for figuring out the distance to more distant stars.
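That arithmetic is easy to check directly. A minimal sketch, using the IAU definition of the AU (exactly 149,597,870.7 km) and a Julian year of 365.25 days for the light-year:

```python
import math

# The parsec from the definition above: the distance at which a 1 AU
# baseline subtends one second of arc.
AU_KM = 149_597_870.7
arcsec = math.radians(1 / 3600)          # one arcsecond, in radians

parsec_km = AU_KM / math.tan(arcsec)     # = 1 AU * cotangent(1 arcsecond)
parsec_in_au = parsec_km / AU_KM         # the ~206,264 factor from above

# Compare against a light-year: c = 299,792.458 km/s, Julian year.
light_year_km = 299_792.458 * 86_400 * 365.25
print(f"1 parsec = {parsec_in_au:,.0f} AU = {parsec_km / light_year_km:.2f} light-years")
```

Running this reproduces the numbers in the text: about 206,265 AU, or roughly 3.26 light-years.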

Today, the parameters of the Earth's orbit are known to very high precision in terms of things like the kilometer, but that wasn't always true. And having probes far enough out to have meaningful light-travel delays is even newer.

Today, the AU is defined in terms of the meter, and the meter is defined in terms of light travel, so in a sense we actually do measure with light travel times. We just do it in a weird way for historical reasons.

3

breckenridgeback t1_j20chmx wrote

> These are the colors our eyes see.

No, they aren't.

The three kinds of cones in your eyes respond most strongly to deep blue-violet, green, and yellow-green light. But each cone responds to a whole distribution of wavelengths around its peak, not just one color.

What you're getting at here is the idea that color is about tristimulus values, not the spectral power distribution. And that's true, at least under the assumption of humans with normal color vision.

But red, green, and blue light are not the tristimulus values, and don't cover all possible stimulus values that can be produced by a spectral distribution.

1

breckenridgeback t1_j1vkkgu wrote

I don't know the answers to these questions. I'm not a professional graphic designer. I do know that Photoshop and other tools support working in different color spaces, wider than those that can be displayed on the web (which uses sRGB as a standard, covering only about a third of human color vision). Some very high-quality monitors support a very wide gamut of colors, and I would assume (but don't know) that those are used for exceptionally high-fidelity graphic design work.

2

breckenridgeback t1_j1vg8yr wrote

Well, one, not everything can be represented by RGB. The RGB color gamut (the colors you can produce by mixing pure red, green, and blue) doesn't come close to covering all possible colors. There are many colors, particularly the richer shades of teal, green, and greenish-blue, that can't be displayed that way. More generally, no finite set of primary colors can produce every chromaticity (the combination of hue, which is what 'type' of color it is, and saturation, which is how intensely colored it is). Such a finite set would produce some straight-sided polygon in the space of possible colors, which can't cover the smoothly curved space of colors humans can see (and, in practice, such a set would also require maximally saturated primaries, which real dyes and the like don't produce).

For two, since different purposes use different mixes of pigments, the spaces each thing can cover vary. Your printer colors, for example, don't align with the colors your monitor can produce, because printers are using subtractive primaries (which absorb light) rather than glowing colors in the monitor (which add light). One common color space for printers is CMYK (for cyan, magenta, yellow, and key [i.e., black, used to darken colors]), and you can see that CMYK and sRGB have different available colors.

And for three, different monitors and other forms of display show things differently. If you want to be able to design a shirt on your computer, then reproduce it in fabric dyes, you need to understand the relationship between those two color systems.


Which brings us to pantone. Pantones don't actually represent any specific mix of pigments, like RGB or CMYK. Instead, they represent an abstract idea of a color that can be consistently represented across different methods of displaying one. Each pantone has representations in RGB or CMYK or whatever else, provided that the color it represents is inside their gamuts, but the pantone is independent of those specific representations.

It's kind of like the idea of the number two existing separately from the symbol 2 (used to write it in Arabic numerals) or the symbol 二 (the Chinese character for this number), or tally marks like ||, or the spelling t-w-o. These are all representations, appropriate to specific situations, of the abstract idea of the number two.

In practice, using pantones lets you design "in pantone", and then implement that design across a wide range of possible materials and means of producing color. Each pantone can be handled consistently, and then implemented in whatever means of producing color support that pantone in their gamuts, so that purple on your screen and purple on a printed page and purple on a shirt all look exactly the same.


EDIT: Hello, /r/all. Before you feel super smart and go "um a 5 year old wouldn't understand that" you should read the sidebar:

> LI5 means friendly, simplified and layperson-accessible explanations - not responses aimed at literal five-year-olds.

403

breckenridgeback t1_iy83dax wrote

I assume the word you wanted was "create".

Inventing a new form of math usually means starting with some new idea for how to think about things, seeing if that idea produces interesting consequences, and then trying to convert that idea into fully mathematical language.

In the case of calculus, we can follow some steps to "invent" it.

So. Let's imagine that you're in a car that is initially moving at 5 meters per second, and smoothly accelerates over 10 seconds to 25 meters per second. (That is, it has a speed given by v(t) = 5 + 2t.) We want to know how far you actually move in those 10 seconds.


Without a formula or some existing math, the answer isn't obvious. We know how fast you're moving at any given time, but the distance you move is your speed times the amount of time you spend at that speed. In this case, you're only "at" each speed for an instant as you speed up, so that idea won't work, at least not without modification.

Okay, well, let's try a different approach. Instead of modeling the way you move continuously, let's imagine you're moving at a constant speed for each second, and use that to get an approximate answer. That way, we can compute distances for each second. So we approximate that you're moving at v(0) = 5 m/s for the first second, v(1) = 7 m/s for the second second, and so on. Since you're traveling at each speed for 1 second in this approximation, the distance you travel is 5 m/s times 1 second = 5 meters in the first second, 7 meters in the second second, and so on. That gets us an approximation of 5 + 7 + 9 + ... + 23 (we don't get a 25 here because we only reach that speed at the end), or 140 meters.

Now, we know that during each second, we're traveling faster than we were at the beginning of that second. The approximation we just did always underestimated our speed, so that 140 meters is a lower bound. We know we went at least 140 meters. That's a nice thing to know! But it's not a complete answer: exactly how far do we move in those 10 seconds?


Well. The error in our estimate comes from imagining that our speed is constant for a whole second. What if we didn't do that? What if we just imagined our speed was constant for half a second, instead? That is, we travel at v(0) = 5 m/s for the first half second, v(0.5) = 6 m/s for the second half second, v(1) = 7 m/s for the third half second, and so on. That should be closer to the real answer, because the approximate speed we're using is tracking more closely with our real speed. That gets us 2.5 meters of travel in the first half second, 3 in the second half second, 3.5 in the third, and so on, for a total of 145 meters.

For the same reason as before, we know this 145 meters is a lower bound. We must be going more than 145 meters. But we're still not exact.

Okay, what if we made those time windows really, really short - only, say, (1/100) of a second? Then we travel at v(0) = 5 m/s for the first hundredth of a second, v(0.01) = 5.02 m/s for the second hundredth of a second, and so on. We have a lot more adding up to do this time (because we now have 1,000 time windows to add up), but if we do add it up, we now get 149.9 meters.

Hmm, okay, what about 1/10,000 of a second? Now we have to add up a hundred thousand little steps, but we get 149.999 meters.

We might, at this point, suspect that the true answer is probably 150 meters. After all, we know it has to be more than 149.999, and we know that 149.999 is probably really close.


We don't know that 150 is the correct answer. But we've developed a technique that suggests that it might be. Can we make that technique better?

Well, one thing we can do is get an upper bound on the true distance using the same technique, by using the speed at the end of each window, instead of the beginning. That is, if we're using one-second-long time windows, we could assume we're going 7 m/s for the first window rather than 5 m/s. That way, we're always estimating a faster speed than the real one.

It turns out that if we do this, and use really short time windows, we end up with 150.001. So we know our true distance is between 149.999 and 150.001. We should really suspect it's probably exactly 150 at this point, particularly since the window between the lower and upper bounds is shrinking the shorter we make our time steps.

Once we put this idea into formal mathematical language, we can show that both the lower and upper bounds get as close to 150 as you could ever want, and the only number that fits between them is exactly 150 - the true answer. But that formal mathematical language turns out to generalize to a lot of other situations, and it's that idea that leads you to start really developing calculus.
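The whole bounding procedure above fits in a few lines of Python. This is just a sketch of the estimates described, with dt standing in for the length of each time window:

```python
def v(t):
    """Speed of the car at time t: starts at 5 m/s, gains 2 m/s each second."""
    return 5 + 2 * t

def bounds(dt):
    """Lower and upper estimates of distance over 10 s with windows of length dt."""
    n = round(10 / dt)
    lower = sum(v(i * dt) * dt for i in range(n))         # speed at window start
    upper = sum(v((i + 1) * dt) * dt for i in range(n))   # speed at window end
    return lower, upper

for dt in (1, 0.5, 0.01, 0.0001):
    lo, hi = bounds(dt)
    print(f"dt = {dt}: between {lo:.4f} and {hi:.4f} meters")
```

Running it reproduces the numbers from the walkthrough: 140 and 160 for one-second windows, 145 and 155 for half-second windows, and bounds that squeeze in toward 150 as dt shrinks - the limit that the formal language of calculus makes exact.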

4

breckenridgeback t1_iy7jb8x wrote

Often, but by no means always, unless you intentionally build it that way.

The Golden Gate Bridge, for example, extends from a hill on the northern end of San Francisco (elevation a little over 200 feet) to rugged high hills/low mountains on the southern end of Marin County (which top out at 800-1000 feet). To make it level, it has to target a specific spot on the Marin coastline, then go through a tunnel.

1