Sez the Economist:
For, according to Dr Pyykko's calculations, relativity explains why tin batteries do not work, but lead ones do.
His chain of reasoning goes like this. Lead, being heavier than tin, has more protons in its nucleus (82, against tin's 50). That means its nucleus has a stronger positive charge and that, in turn, means the electrons orbiting the nucleus are more attracted to it and travel faster, at roughly 60% of the speed of light, compared with 35% for the electrons orbiting a tin atom...
If the problem isn't immediately obvious to you, pause a moment before proceeding over the fold.
So: this is one of the classic problems of classical physics. Accelerated electrical charges emit electromagnetic radiation. If electrons really whizzed round nuclei, they would radiate, their orbits would decay, and all matter would collapse. This doesn't happen. [[Atomic orbital]] looks reasonable and corresponds roughly to what I think I know.
I'm hoping that the Economist has garbled Dr Pyykko's work, rather than Dr P having garbled reality. But since it is in Physical Review Letters I presume it is of high quality.
* Phys. Rev. Lett. 106, 018301 (2011) Relativity and the Lead-Acid Battery (paywalled)
* Relativity and the lead-acid battery - arXiv (ht BD)
My guess is that it's a question of whether relativistic corrections are involved, or whether non-relativistic quantum mechanics suffices. Presumably Pyykko isn't committed to the electrons' having actual trajectories, and presumably he isn't committed to the validity of classical EM's account of charges producing radiation in atoms. (But I'm not going to take time to read the article, or to pull out my old atomic physics texts.)
Well, batteries work because of the electrochemical reactions, studied by physical chemists. Just what happens is a matter of determining all the outer quantum mechanical orbitals, not so easy before the days of fast computers.
Even if the Rutherford model of the atom were correct, Dr Pyykko's explanation as expounded by the Economist would be incorrect.
It is only the outer shell of electrons that is involved in chemical reactions, and hence in battery storage of energy. As lead has two more shells of electrons between the outer shell and the nucleus than does zinc, there would be less attraction between the nucleus and the outer shell in lead than in zinc.
FWIW, you do have to make relativistic corrections to calculate the energy levels of lead, roughly because the attraction of the nucleus for the electrons is stronger. The corrections are most significant for the innermost electron shells, but the effects cascade outwards.
In the long run this makes a small difference in strength with which the outermost electrons are bound, and those are the ones which take part in the battery reactions.
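The speed figures quoted from the Economist can be sanity-checked with the standard hydrogen-like estimate v/c ≈ Zα for the innermost (1s) electrons. This is a back-of-envelope sketch, not the paper's calculation; the valence electrons are slower, but they inherit the effect from the contracted core.

```python
# Rough check of the Economist's "60% of c for lead, 35% for tin" figures,
# assuming the textbook hydrogen-like estimate v/c ~ Z * alpha for a 1s
# electron. Purely illustrative scaling, not a real atomic calculation.
ALPHA = 1 / 137.035999  # fine-structure constant

def one_s_speed_fraction(z):
    """Approximate v/c for a 1s electron around a nucleus of charge Z."""
    return z * ALPHA

print(f"Sn (Z=50): {one_s_speed_fraction(50):.2f} c")  # ~0.36, i.e. ~35% of c
print(f"Pb (Z=82): {one_s_speed_fraction(82):.2f} c")  # ~0.60, i.e. ~60% of c
```

So the quoted percentages are just Z/137 for each element, which matches the article's numbers.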
The conclusion is nonsense as anyone who has used a lithium ion battery (no relativistic correction needed there) could tell you.
I don't see the problem. Pekka Pyykko is a theoretical chemist, as am I. He is also one of the world's leading experts in relativistic effects in quantum chemistry, which I am very much not. Nevertheless, he and I speak the same language, and this really is how we theorists talk when we discuss research problems. Quantum mechanics assures us that we don't have to worry about accelerated electrons in the 1s state radiating (because they have "nowhere to go" in energy). That settled, we can define operators for velocity and acceleration in quantum mechanics, and these operators obey classical-like equations of motion (the Heisenberg equations). And classical scaling arguments give us an easy way to interpret the demonstrably huge effects of relativity on the quantum states of the core levels of heavy atoms, which do indeed propagate out to the valence states. The effects are NOT small - relativity explains why gold is yellow and why mercury is a liquid.
[I'm quite prepared to believe that relativistic effects matter. And also prepared to believe this is a language issue, rather than one of substance. "We can define operators for velocity and acceleration in quantum mechanics" sounds reasonable, but how do you interpret them? -W]
From the same Wikipedia entry linked in the post:
"For elements with high atomic number Z, the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities..."
[Oops. Well, you can't trust wiki for everything. I'll draw that to their attention -W]
"Orbit" is the wrong word, but I believe the basic idea of the quoted part of the Economist article is otherwise valid. I don't know enough about the nitty-gritty of relativistic effects in chemistry to judge how valid the modeling is (especially not from a news article), but chemists do use relativity to explain the properties of heavier atoms.
I don't see as lithium has much to do with it. The article doesn't say that relativity is necessary for an effective battery, it says that relativity is necessary to explain the different behaviors of lead and tin. Lead and tin would be expected to behave fairly similarly due to their being in the same column of the periodic table, and especially since tin is only one spot above lead. There are, of course, things other than relativity that will make the two elements different, but I wouldn't be surprised if relativity did indeed have some impact in this case.
The actual paper: http://prl.aps.org/abstract/PRL/v106/i1/e018301
[Thanks. Added as a ref. I haven't read it, cos it is paywalled (plus there is a good chance that I can't). I note that Dr P is 3rd not first author, as the Economist had implied -W]
There's a copy on arXiv.org, submitted 2 days prior to reception by Phys. Rev. Lett.
[Thanks. I stumbled my way through it. The substance never mentions speed; only at the end do they say "Concluding, the lead-acid battery belongs to those familiar phenomena, whose characteristic features are due to the relativistic dynamics of fast electrons when they move near a heavy nucleus. In this case the main actors are the 6s valence electrons of lead, in the substances involved" -W]
I don't understand the problem with the description.
Yes, it's technically incorrect because electrons do not move in the classical sense. However, it's easy to grasp and is sufficiently close to reality.
You don't need to call their attention to it; it's right. Just because electrons don't follow defined orbits doesn't mean that they aren't moving.
Alex - I agree. Other than using the word "orbit," the description sounds correct to me.
This post is quite ironic, because it is a perfect illustration of a standard tactic in the global warming deniers' arsenal. Take a description of science in the lay press. Combine this with a little popular science level knowledge of the subject - in this case that electrons don't literally move in orbits in the classical sense. Add a misplaced emphasis on the "error" you've discovered, and finally end up with the sentence, "I'm hoping that the Economist has garbled Dr Pyykko's work, rather than Dr P having garbled reality." With the word "hoping" you manage to imply that there is some probability that the expert in the field is actually incompetent to the degree that he/she has less understanding of the subject than you or your reader.
There are various lessons in this.
1) I suppose the Economist writer can be faulted for not explaining quantum mechanics, and maybe there could have been another sentence to say this was an oversimplification that ignored quantum mechanics. But after all, even though electrons may not be little marbles whizzing around, electrochemistry produced lots of good results before quantum mechanics came along.
Actually, given the difficulty of getting good science reporting, somebody with relevant credibility might point out the nit, but mostly praise the writer for explaining an interesting result in a way appropriate to the audience.
2) Did everyone read the full Economist article before commenting, or just the out-of-context excerpt?
(I didn't think so, did you Eli? :-) Lithium is mentioned, appropriately.)
I suspect that the article is explaining why lead-acid batteries work but tin-acid batteries would not. The lead acid battery is based on Pb (oxidation state 0), PbSO4 (ox +2) and PbO2 (ox +4).
Being in group 4, the valence electron configuration is s2p2. Removing two electrons from the p gives the +2 oxidation state. The energy level of the s (which in turn is contracted by relativistic effects) affects how easy it is to remove them (inert s pair effect) to form the +4 state.
Dr Pyykko would be solving the Fermi-Dirac equation, which very few in the world can do as accurately, he only has 40 years of experience.
W's objection to the language used would apply to the description of a nonrelativistic H atom, and the lack of bremsstrahlung radiation from accelerating electrons in orbits was indeed a puzzle. The solution was to say they can't, because they can't have just a little less energy; only certain energies are allowed. I think the easy way to think of the velocity is as being related to the kinetic energy, which has an average that can be computed.
John Mashey guilted me into reading the Economist article - as a popular science article it was quite good.
A somewhat clearer explanation
For those so inclined, the PRL paper in question is available at:
The Economist article is not especially bad, but it falls into the usual trap for science articles in the mainstream press, which is an inability to accept that science is almost always incremental. In the present case, they state, "Dr Pyykko and his colleagues made two versions of a computer model of how lead-acid batteries work. One incorporated their newly hypothesised relativistic effects while the other did not." The relativistic effects in question are so far from being "newly hypothesized" that they actually used commercial software to do the calculations in the paper. I know of at least two downloadable open-source codes that can do the solid phase calculations with the relativistic effects. It is cute that relativity is important for such an everyday application as batteries, but the idea that relativity is important for this kind of calculation on lead has not been novel for decades.
I'm harping on this because I think this is a case of a general phenomenon that is really harmful to the public's understanding of science. When the mainstream press writes about science, they have incentive to play up the novel aspects, and so they overplay the significance of that small fraction of papers that they decide to cover. The cumulative result is that readers get the impression that science is lurching from one fad to the other, and is fickle and unreliable. Think of all the articles on diet and health, for example. People soon get fatigued, and are unable to distinguish the big picture things that science is confident about (cigarettes cause cancer) from the speculative and marginal. They soon conclude that scientists think that everything causes cancer, so it's hopeless to try to act on the basis of scientific evidence.
[Yes, I agree. As you say, this isn't very novel - the only novel aspect is that they actually bothered to do the calculations. And I very much agree with what you say about the misleading impression that is given to the public -W]
I would claim The Economist was not too bad at this, and I just hope that if anyone writes letters, that if they complain about nits they also praise them for getting it mostly right and explaining it at a reasonable level. Compare this with Forbes, lately. Sigh.
Side note: RC has a Bore Hole, and Eli a Rabett Hole, to good effect. WMC doesn't get as much of the junk, so may not need one, but if there were an equivalent, what would it be called? Stoat burrow? Stoat den?
[I did wonder. MT doesn't seem to have a "move this comment" facility, and I can't be bothered with cut-n-paste -W]
Although The Economist article might get over the message that relativistic effects are important in everyday consequences (hence John Mashey's "not too bad", despite conceptual problems with 'orbits'), it does abuse the way science and technology developed in this context. It is, as Carl Greef points out, presenting a totally distorted picture of what is new (merely the ab initio calculation of lead's electrode potentials using standard methods, at which Prof Pyykko is expert) and its implications, excitedly claiming "That is an extraordinary finding, and it prompts the question..." For anyone who has a glancing acquaintance with the topic it is old news - I recollect a 20-year-old article on the topic in a Chemical Education journal - although a pleasing illustration. As for doing ab initio calculations to find out new relativistic technologies...
The press often newly discovers things about science or technology long known inside their fields, but not well-known to the general public. I went through some arguments about such over at Collide-a-Scape about serendipity and the Penzias/Wilson discovery of cosmic background radiation. Why, I've had journalists discover that software sometimes has bugs :-)
But if anyone would actually like to see science reporting improve for a general audience (although of course The Economist is hardly the mass market), carping about it on random blogs is not very useful. I've written before about practical things one can do.
Here's something that might actually be constructive:
1) Having read the paper and the full Economist article, and thinking about The Economist's readership and the constraints of the magazine, draft a short letter, and post the draft here.
2) Get comments and iterate.
3) If I were doing it, I'd generally praise the article for getting the general ideas right, then make a few suggestions for getting it better; but of course, if people feel the article was so bad they should focus 100% on its terrible flaws, and we should drive such junk out... go ahead.
4) My personal opinion is that writing good science or technology pieces for a general audience, under the constraints of weekly/daily publication, is extremely difficult, to the point where I'm ecstatic to encounter anyone that knows anything and is even trying. I probably send emails to reporters a few times a month, both encouraging them and sometimes pointing out things they might have missed. I bug The Economist on occasion, although this topic isn't mine.
It has given me another useful example (along with GPS) where relativity has clear effects in everyday life.
[I left them a comment on their blog. Hopefully they read the comments there. GPS: by odd co-incidence, someone was talking about that at work just last week -W]
Carl G.: "Think of all the articles on diet and health, for example. People soon get fatigued (...)"
Confused would be better than fatigued IMHO, noting that for the last two weeks the up-front new release display at my local Barnes & Noble (some dozens of titles) has been almost entirely composed of diet and health books.
WMC: good to hear.
MT: no move comment: too bad, but blog software is still in its infancy.
Electrons do move, so why can't they whiz? How much of an electron is vibration?
[QM is funny stuff, but interesting. See the wiki page I linked for some -W]
> where relativity has clear effects in everyday life
What about Maxwell's equations? They were relativistic, i.e., Lorentz invariant, before anyone even knew what that was. I suspect that if they were not, we would be observing some really weird EM phenomena here on our moving Earth...
Martin: likely so, which tells me I should have better explained my use of the GPS example, and I guess really means the lead-vs-tin batteries example is different.
When I mention GPS, the context is:
1) Science evolves by creating models that are successively better approximations to reality.
Of course, sometimes simple models are still good enough approximations or useful enough within some range.
2) For example, Newton's laws of motion are pretty good within the range of speeds found on Earth.
But GPS doesn't work without Einstein (correction for time dilation).
This feels different to me than lead-acid batteries or many other products built on Maxwell's, which have to be consistent, but were easily buildable with no idea of relativity, whereas for GPS it seems necessary. I suppose it's possible to imagine a counterfactual history in which the time dilation effect is only discovered when they put up GPS satellites and wonder why they don't work right :-)
Maybe there are better examples of technology that people use widely that are not only consistent, but actually depend on being designed with the understanding of relativity.
I wrote the Economist piece, and a colleague pointed me to this thread.
[You should read the comment on the on-line article, since I linked to this there several days ago! Otherwise, thanks for commenting. It is clear from reading the other comments on the Economist article you wrote that there is a huge spectrum of degree-of-knowledge and you can't write for all -W]
I, of course, appreciate the criticism that electrons don't really orbit the nucleus. I have a PhD in physics myself and would gladly have explained the research in terms of orbitals, wavefunctions, and density of states above the Fermi level. But you have to remember that we're writing for a general audience. It's more important to get the essence of the idea correct than to try to explain all of quantum mechanics in 600 words. You lose the reader before you even get to that point.
If you think about what's really happening (electronic wavefunctions with increased probability of being closer to the nucleus from the relativistic corrections and a lowering of the energy of the free orbitals just above the Fermi energy), I think the description we published actually does a pretty decent job of getting the main points across. Except, of course, for thinking about orbits rather than probability densities.
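The contraction described above can be glossed with a simple scaling argument (a sketch, not the paper's calculation): taking v/c ≈ Zα for the 1s electrons, the relativistic mass increase is γ = 1/√(1 − (v/c)²), and since the orbital radius scales as 1/m, the innermost shell contracts by roughly 1/γ.

```python
# Illustrative scaling argument for relativistic orbital contraction,
# assuming v/c ~ Z * alpha for a 1s electron and a radius that shrinks
# as 1/gamma with the relativistic mass increase. Not a real calculation.
import math

ALPHA = 1 / 137.035999  # fine-structure constant

def contraction(z):
    """Approximate 1s radial contraction factor 1/gamma for charge Z."""
    beta = z * ALPHA                          # rough v/c
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)  # relativistic mass increase
    return 1.0 / gamma

print(f"Sn: {contraction(50):.3f}")  # ~0.93, a ~7% contraction
print(f"Pb: {contraction(82):.3f}")  # ~0.80, a ~20% contraction
```

On this crude estimate lead's core shells contract about three times as much as tin's, which is the qualitative difference the valence 6s electrons inherit.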
The point here was to communicate that relativity actually comes into play every time people start their cars. I suspect that people really do find this neat. There aren't that many other day-to-day applications of special relativity. For the record, Pyykko read through the article and thought the explanations got the main points right.
As for the phrase "newly hypothesized", that could have been better. It was newly hypothesized that the relativistic effects would matter in the lead-acid battery, but, as has been pointed out, these effects were well known. They just hadn't been applied to this system before. Novel applications of old techniques is still good science.
[This comes back to the problem about "news". Clearly, a very large part of your readership doesn't know that electrons whizzing around atoms is a badly oversimplified picture. There is an interesting article to be written on that very subject, for such an audience. But, is there any chance of the E publishing it, since it wouldn't be "news"? -W]
Doubtful that an article on just that subject would be publishable, since it doesn't have a news angle. Pretty much for the same reason that a scientific journal won't publish an article that exactly replicates work done 20 years ago without some new twist, a new technique, or better precision. It's just not new.
[Yes, I think that is the problem with writing science for the likes of the Economist. Novelty is all, even spurious novelty. Simply telling people something that is new to them won't do :-( -W]
Thanks for commenting.
I did think it was pretty neat (although not as striking as GPS), but certainly an interesting place where relativity and quantum effects overlap.
I would suggest that this whole interchange is an instructive example for everybody in the issues of writing for general audiences and ways of getting constructive feedback into the process.
Chaos Theory has perhaps more involvement than relativity in climatology.
Chaos theory : yes, but the idea of models hierarchy is relevant.
What is models hierarchy?
In a models hierarchy, each successive model is a better approximation of observed reality, however the simpler ones may be much easier to understand, or more computationally tractable, and may well be "good enough" for some restricted domain.
1) Newton ... Einstein
2) Thomson plum pudding ... Rutherford ...modern models of atom
3) in electrical engineering, chip simulations:
Instruction simulator ... Gate level logic simulator
Arrhenius (simple calculations) ... 1980s simulations ... Modern GCMs
Of course, relativity and quantum mechanics aren't a hierarchy, which I guess is why Einstein sought a Unified Field Theory, or "one theory to rule them all."
> chaos theory
".... the question Lorenz asked at a 1979 meeting, "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" (Already in 1975 a science journalist had asked, "can I start an ice age by waving my arm?") Lorenz's answer - perhaps yes - became part of the common understanding of educated people."
So the 1970s "ice age" notion is explained -- armwaving!
Avoiding the carbon tax is understandable. It is dry and contentious. Even at its best, it is a game played on a crooked field.
At a talk at the American Association for Aerosol Research last October, James Fleming (author of Fixing the Sky: The Checkered History of Weather and Climate Control, http://tinyurl.com/49o2xz7) said that he once asked Lorenz if a butterfly really could cause a tornado, and Lorenz responded, "sure, if it was the size of the Rockies". Not sure if Fleming or Lorenz dubbed it the "Mothra Effect."
Very very few small scale perturbations are not quickly dissipated in any turbulent fluid system.
[People seem to believe this but it isn't true; at least, it isn't true in climate models. Whether it is true in the real world is another matter. See http://www.realclimate.org/index.php/archives/2005/11/chaos-and-climate/ Don't forget to follow through to http://mustelid.blogspot.com/2005/10/butterflies-notes-for-post.html
So: all disturbances grow, in the sense that the solution turns out to be completely different to what would have happened otherwise. But finding a small perturbation that leads to a hurricane in the area you want at the time you want is difficult to impossible -W]
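The "all disturbances grow" claim is easy to demonstrate numerically. Below is a minimal sketch, assuming nothing beyond Lorenz's 1963 equations (the classic parameters σ=10, ρ=28, β=8/3) and a crude forward-Euler integrator; two trajectories start a "butterfly-sized" 1e-10 apart.

```python
# Sensitive dependence on initial conditions in the Lorenz system:
# a 1e-10 perturbation grows until the two trajectories differ by
# the full size of the attractor. Crude Euler stepping, for illustration.
def lorenz_step(s, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-10, 1.0, 20.0)  # the butterfly flaps its wings
max_sep = 0.0
for _ in range(20000):        # 40 time units
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)
print(max_sep)  # the tiny difference has grown to attractor size
```

Which is exactly the point: the perturbation grows until the solutions are completely different, but nothing about this tells you *which* different solution you end up with.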
Nosmo, if you follow that link to Weart's book, he invites comments --- I bet he'd thank you for that anecdote and cite if you'll send it to him.
OK, I knew I was going out on a limb with the statement "very few small disturbances..."
It been too long since I studied this stuff, and then it was in stationary turbulent systems not the atmosphere. I half talked my self out of this, but here goes anyway...
Consider an extreme case: a perfectly calm day, with no measurable wind. Locally the air is not chaotic, let alone turbulent. Seems to me any disturbance of the size of a butterfly flapping its wings would dissipate well before it propagated out of the region of calm air. I guess the resultant thermal fluctuations will eventually end up affecting some area of the planet where the perturbation would grow, even if the disturbance was much much smaller than the local dissipative length scale (Kolmogorov microscale). But I see this as a second-order effect. By this logic, I guess farting in my closed office, or turning on a flashlight at night, could also eventually "cause" a tornado.
[I think you are right, in your last sentence. There is dissipation, yes, but it never completely removes any change, and sometime it is going to grow, in a chaotic-type system -W]
If the flow is turbulent then yes, small perturbations will by definition grow, and while I agree that a butterfly could eventually "cause" a tornado, I took Fleming's story of the Lorenz quote as meaning this happens very rarely, and that the real world is a lot more subtle than the common interpretation of the "butterfly effect".
I'm not yet ready to believe Lorenz was misquoted.
Given the resolution of GCMs Eli has no doubt that a "small" effect averaged over the resolution element is a Mothra
[My recollection is that it was more like a vulture. But you can work it out :-). Either way, it doesn't matter. I could have reduced it further; all that happens is that it takes the initial perturbation a little longer to grow to "visible" size -W]
Or die out. Odds on Eli
[Why should it die out? Information is not lost.
Are you claiming that GCMs are so unlike the real world? -W]
I recently read a paper on stable (surface) water waves, a part of mathematical continuum mechanics. Turns out that much is still unknown. For example, there are no solutions for waves in a finite-depth basin except over a finite interval of time (which is quite long). The author comments that it is thought that such waves eventually dissipate, but there is no proof.
[That sounds odd, I can't see why extending the solutions out to infinity in time would make it harder to find a solution. And of course, in an infinite horizontal basin of finite depth there are infinite-time solutions -W]
Of course water is composed of atoms and whatever actual difference that makes is ignored.
[Or indeed protons. Or quarks. Or... -W]
Of course information is lost in a machine with a finite memory and a finite number of bits in each representation.
[Oh dear, we are getting off topic. I was talking about the real world - or at least, the representation of the real world by non-linear PDEs -W]
William --- I stated the matter correctly. There are no mathematically rigorous infinite-time solutions for the general case:
Walter A. Strauss, Steady Water Waves, Bull. Amer. Math. Soc. 47:4 (Oct 2010), 671--694.
"Short-time existence for the Euler and Navier-Stokes equations ... is well known... Global existence, that is, the problem of solving the equations for all times in [0, infinity), is much more subtle. In two dimensions the problem was solved for the Euler equations ... but in three dimensions it is still open." That's even without vorticity.
[It does depend on which equations you mean, of course. There is, though, something of an ambiguity in your quote: is it talking about a proof of an *existence* of solution, or of actually providing a solution? -W]
I was hoping my comments would generate more discussion. So I'll try again.
There are some fluid systems which are clearly chaotic, such that infinitesimally small perturbations grow exponentially. Others, such as pipe flow, require finite perturbations to grow. At low Reynolds numbers the flow will be laminar, but as the Reynolds number increases, smaller and smaller perturbations are required to trigger turbulence. (If my memory hasn't failed, I recall that the record for laminar pipe flow was Re=10,000, although usually turbulence sets in at something like 2000 in smooth pipes.) In other systems perturbations are damped out. I think there may also be systems with regions of stability where some perturbations are damped out.

In simple chaotic (and other) systems, an infinitesimal perturbation will lead to exponentially diverging solutions. In these cases any perturbation (that is not precisely aligned with the Lyapunov vector) will grow. This is the case for the Lorenz attractor and the equations used in climate models. However, in my example above of a butterfly flapping its wings on a very calm day, the perturbation will first decay, and one is left with a very small thermal effect at the molecular level. This is fundamentally different from the standard interpretation of sensitive dependence on initial conditions.
The atmosphere is clearly not homogeneous, isotropic nor stationary. On some scale and in some places clearly it is chaotic and perturbations will always grow. But I just don't see how this is always the case.
Is there something wrong with what I wrote?
I don't have a feel for the state of the atmosphere and how often perturbations grow. Does it take very special circumstances for a butterfly's perturbation to grow before decaying, or is it usually the case? I would guess that most of the time it would decay.
[Perhaps it helps to think in non-fluid dynamics terms. If we start with a pencil exactly balanced on its tip, then clearly any tiny perturbation leads to it falling one way or another: a small initial perturbation grows. But if we have a perfect frictionless pendulum and perturb it, then it merely retains that information. And a damped pendulum will stop, at infinity, and before then gradually decay away any perturbation. But the perturbation is still there, just damping away; and if "activated" by being part of the initial conditions for some chaotic system, then it will matter. This assumes that everything stays above the level where molecular effects matter -W]
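The pencil-versus-pendulum picture above can be made concrete with the linearised equations of motion (a sketch; the frequencies and damping constant are arbitrary illustrative choices): θ'' = +ω²θ for the pencil balanced on its tip, θ'' = −ω²θ − cθ' for the damped pendulum.

```python
# Pencil on its tip (unstable equilibrium) vs damped pendulum (stable):
# the same tiny perturbation grows in one case and decays in the other.
# Simple Euler integration of the linearised equations.
def evolve(accel, theta=1e-6, omega=0.0, dt=0.001, steps=5000):
    """Integrate theta'' = accel(theta, theta') from a small perturbation."""
    for _ in range(steps):
        omega += accel(theta, omega) * dt
        theta += omega * dt
    return abs(theta)

w2, damping = 1.0, 0.5
pencil = evolve(lambda th, om: +w2 * th)                  # grows ~cosh(t)
pendulum = evolve(lambda th, om: -w2 * th - damping * om)  # decays away
print(pencil, pendulum)  # pencil's perturbation is now orders of magnitude larger
```

The pencil amplifies its initial 1e-6 perturbation; the damped pendulum's shrinks, but (as the comment above says) never reaches exactly zero in finite time.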
The Cauchy problem of a system of pdes is two-fold: (1) existence and (2) uniqueness for arbitray initial conditiions @ t=0 and for some half open interval of time begining @ t=0. To a mathematician, demonstrating existence is solving the equations. Even for the Euler equations this is open in three dimensions for all but special cases which require only a finite interval of time over which the solution evolves.
From the same reference, "For the Navier-Stokes case ...[with viscosity] it is one of the "millennium" [prize] problems." So solve it in you spare time and you'll be $1 million richer. :-)
Lead battery paper.
I have not caught up with it yet so the following reminiscence may be just as irrelevant as some of the above comments.
This attracted my attention because I once shared a calculation (with G.J.Morgan) of the energy gap of lead tin telluride. It was a long time ago, but as far as I remember the biggest * contribution was the relativistic effect which applied to the s component of the wave function at the band edge, i.e. the component whose magnitude remains significant close to the nuclei of the lead atoms.
By the way, someone above refers to the difficulty of solving the Dirac (not Fermi-Dirac) equation. It isn't always necessary, I used a simplified two component version of it. There were two terms of similar magnitude, first the mass-velocity correction which is self explanatory and secondly the Darwin term (discovered by yet another member of Darwin's family) which is caused by the interaction of quantum and special relativistic effects.
* The different sizes of Pb and Sn also matter.
[.... But the perturbation is still there, just damping away; and if "activated" by being part of the initial conditions for some chaotic system, then it will matter. This assumes that everything stays above the level where molecular effects matter -W]
But the story from Fleming indicates that Lorenz thinks they usually don't. If the time constant of the decay for a butterfly flapping its wings on a calm day is on the order of seconds, it won't take very long for it to get down to the molecular level.
I understand what you have said. I am just asking what usually happens, rather than what can happen. How unusual is it for the flap to grow without decaying first - does it happen most of the time, does it just take a typical gusty day, or is it almost impossible?
[Ah, I see. In that case, the answer is "don't know". However, getting the decay down to molecular level doesn't rescue you, it will still affect the molecular wigglyness; billiard balls bouncing off each other is probably chaotic too, and lets not get started on the quantum stuff -W]
[With apologies for the length.]
we can define operators for velocity and acceleration in quantum mechanics, sounds reasonable, but how do you interpret them? -W]
I might well have missed the point but see no problem. The basis states |v> have unique velocities v. The actual states are a mixture of these. The expected value ev(v) is proportional to the observable electric current. Even if ev(v)=0 (as e.g. in atoms), its standard deviation sd(v) will be non-zero (uncertainty principle), so that large values of v will be present, enhancing the mass.
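A numerical gloss on that point (a sketch using only the non-relativistic virial theorem, so the numbers are illustrative): for a hydrogen-like 1s state the mean kinetic energy is ⟨T⟩ = −E = Z²·Ry, so the r.m.s. speed √(2⟨T⟩/m) comes out at Zαc, even though the mean velocity ev(v) vanishes.

```python
# sd(v) is non-zero even when ev(v)=0: the virial theorem gives the r.m.s.
# speed of a hydrogen-like 1s electron as Z * alpha * c. SI constants below.
ALPHA = 1 / 137.035999       # fine-structure constant
C = 2.99792458e8             # speed of light, m/s
M_E = 9.1093837e-31          # electron mass, kg
RY = 13.6057 * 1.602177e-19  # Rydberg energy, joules

def v_rms(z):
    """R.m.s. speed of a hydrogen-like 1s electron of nuclear charge Z."""
    t = z * z * RY           # mean kinetic energy via the virial theorem
    return (2 * t / M_E) ** 0.5

print(v_rms(1) / C)   # ~0.0073, i.e. alpha: hydrogen's electron is slow
print(v_rms(82) / C)  # ~0.6: lead's core electrons are not
```

So "how fast is the electron" has a perfectly sensible operational answer via sd(v), which is what the scaling arguments above are built on.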
I am not sure that 'orbits' should be banned except in elementary accounts. They are basic to the Feynman path integral approach to QM. It's just that the electron tries an infinite variety of them.
E.M. radiation has been dealt with above, except for this possible transition:
non-relativistic => relativistic caused by
p-type excited state => s-type ground state
which is probably nothing to do with batteries.