What with all the attacks on science and scientists these days, we may not want to focus on those times when science goes off the rails and makes a huge mess of things. But science at its best, and scientists at their best, will never shy away from such things.
Dr. Paul Offit just wrote a book called Pandora's Lab: Seven Stories of Science Gone Wrong, which is not about an evil black dog that escaped from a box, but rather about seven instances when the march of scientific progress headed off a cliff rather than in the desired direction. People died. Many people died. Other bad things happened.
Note: I interviewed Paul Offit about his book on Atheist Talk Radio. This interview will be aired on Sunday, May 28th, and will be available as a podcast. It should be HERE.
Readers will have different reactions to, and ways to relate to, each of the seven stories, because they are far flung and cover a great deal of time, diverse social settings, and a wide range of scientific endeavors. Some readers will get mad because he talks about DDT and Rachel Carson, though I assure you his argument is mostly reasonable (I did disagree with some parts). All readers will be amazed at the poppy plant and all it can do and has done, and astonished at the immense apparent ignorance displayed by that plant's exploiters, from the early 19th century to, well, yesterday. Those interested in race and racism, or in the use of poison gas to kill people, will find things they didn't know in Offit's carefully researched histories. Also, don't forget to take your vitamins. Or, maybe, forget to take your vitamins.
The chapter "The Great Margarine Mistake" is a great example of the very commonly screwed up interface between food science, food production and marketing, and the shaping of food preference among regular people. You know, that thing where "They tell us not to drink coffee. Then they tell us to drink coffee. They don't know nothin'"
My biggest disagreement with Paul is over malaria. He did not incorporate an often overlooked fact about the disease into his discussion, and had he done so, may have written a somewhat different chapter. Briefly, in zones where there are two wet seasons (or one long wet season and a very short dry season) there has never really been success in curtailing malaria. In zones where there is a very long dry season but it is wet enough for part of the year for the mosquito that carries malaria to exist at least most years, malaria is relatively easy to beat down using a wide range of techniques, no one of which is supreme. So, for example, today, the distribution of malaria in South Africa, where it is not actually that common (thousands of cases in a normal year among tens of millions of people) is determined mainly by how wet the eastern wet season is, integrated with the movement into that area of people, usually refugees, who are a) infected and b) not getting medical treatment. (See this.)
Malaria was wiped out in country after country prior to the use of DDT, then the DDT came in and helped a great deal, in those relatively dry countries. But the wet countries, not so much. Indeed, in a place like Zaire, there are absolutely no reliable statistics on how common malaria is or ever was over most of the country, but when I lived there in the 1980s, it was as common as the common cold in New Jersey, and DDT was theoretically in use. (That is a second correlation with causation: the wetter the equatorial country, the less we actually know about disease. I recall leaving the deep rain forest to visit the "city" to get hold of a few courses of leprosy medicine for a handful of people who visited our clinic who had it, where I had dinner with a guy from the UN who was on his victory lap for having wiped out leprosy in Africa.)
In some ways, Offit's final chapter is the most interesting: the eighth chapter (combined with the Epilogue), in which he does two things. One is to identify the kind of reasoning or methodological mistake each of his seven examples exemplifies, such as failure to pay attention to the data, or failure to pay attention to the man behind the curtain. The other is to go quickly through what may end up being similar stories of science gone wrong just starting to brew today or in recent decades, such as the long term unintended effects of widespread use of antibiotics.
A question that Offit's book raises, indirectly, is this: When a Pandora-like box opens and some sort of monster creeps out, why did the box open to begin with? Sometimes it is jostled open, like in the case of unintended negative outcomes from the use of antibiotics. Sometimes it is opened because someone can't resist the treasures that may be inside. Sometimes it is opened because science is an open process and must always seek knowledge etc. etc. I wonder if the recent development of an engineered polio virus (three instances), or the Spanish Flu, is an example of such. Sometimes it is opened because of (Godwin Warning!) HITLER. Seriously.
I don't know what knowing these reasons gets us, but one possibility is this: when we find ignorance as a root cause of calamity, perhaps an appreciation of knowledge is gained. That is certainly the lesson of Offit's review of the products of opium, their invention, intensification, deployment, and use. Apparently addiction was simply not understood at all until fairly recently, and that lack of understanding caused science, medical technology, and medical practice to do exactly the wrong thing over and over again.
And of course, lobotomies. The invention of the later method of performing this useless and horrible procedure is something that, if put in a movie as a plot element, would kill the movie, because it is not possible to suspend disbelief to the degree necessary to stay seated in the theater.
Pandora's Lab: Seven Stories of Science Gone Wrong is a great read and a necessary addition to the bookshelf of any practicing skeptic or science enthusiast.
Paul Offit, who is a pediatrician and the inventor of a rotavirus vaccine (see this for an interesting podcast on a related topic), is the Maurice R. Hilleman Professor of Vaccinology and Professor of Pediatrics at the Perelman School of Medicine, University of Pennsylvania. He is also chief of Infectious Diseases and director of Vaccine Education at the Children's Hospital of Philadelphia.
Aside from Pandora's Lab, he also wrote Do You Believe in Magic?: Vitamins, Supplements, and All Things Natural: A Look Behind the Curtain, Deadly Choices: How the Anti-Vaccine Movement Threatens Us All, and Bad Faith: When Religious Belief Undermines Modern Medicine.
Artificial intelligence may turn out to be the very worst case of science gone wrong. At http://discourse.numenta.org/t/free-will-volition-module and thereabouts we are rushing headlong, laudably, towards understanding the brain, but perhaps foolishly towards creating AI.
Not really piqued by a URL that says "free will" in a "discourse" blog.
AI is entirely benign. What if your toaster had AI? What if it went bad? How would it go bad? Murder you by burning your toast all the time until you get a coronary? Your PC would do what? Decide to drop the USB packets that your mouse click made and make you lose your game of DOOM?
None of it makes any dent in free will, though. AI is orthogonal to that. And it's not going to teach us about the brain either, because the operation of the brain is in how it works, not what was put in after the fact. We can only program stories; reality doesn't do stories, it just executes one. And whether free will exists or not entirely depends on what you define as free will. Much like sound with the "tree in a forest" thing.
Skynet got rid of humanity because we gave it all the bombs. If we'd gotten rid of the bombs...
By way of background, back in 2005 I submitted the first Slashdot story -- https://tech.slashdot.org/story/05/03/24/1518224/palm-founders-form-ai-… -- about the founding of Numenta by the inventor of the Palm Pilot. Now twelve years later I stumbled into the Numenta hierarchical temporal memory "HTM Forum" while Googling for discussions of "How the Mind Works", because the 1997 Stephen Pinker book by that lofty title annoyed me with its failure to make good on its promise of explaining how the mind works.
Last night on 2017-05-25 I tried to comment in a "sensorimotor" thread, but the moderator split off my discussion into a separate "Free will Volition module" thread -- thus establishing my not-yet-D-Day beachhead into the meme-susceptible minds of the AI enthusiasts most eager to approach AI from the viewpoint of neuroscience. Here and at http://www.ai-forum.org/topic.asp?forum_id=1&topic_id=79318 I am trying to accelerate the emergence of AI by attracting talent and mind-share to the bottom-up Numenta project which may gain from my own top-down AI work.
What we desire from "free will" is the intelligent selection of goals without coercion or deception. Determinism, yes; detention, no.
Would some wanna-be please take over my AI theory and my AI software so as please to absolve me of any blame if things go terribly wrong? -Arthur
If this is the same Arthur T. Murray whose crap I've seen before, he is a long time, and well-known, crank, who doesn't have any formal training in AI or even computer science. He's learned about it "from reading science fiction".
At one time he pushed a "theory" of the mind that said, roughly,
The mind contains at its foundation a two-dimensional matrix where the columns represent senses (smell, touch, taste, etc.) and the rows represent time. As time passes, the brain stores the sensory input it receives in successive rows of this matrix.
Crank all the way down.
OK, you need to change your pitch. Woomancer words don't work on decent scientists, and what you said was so laden with buzzwords and word salad that it's practically begging to be ignored.
You'll fleece the gullible with it, or attract the dumb's money, but the dumb money won't work, because you need the smart money to make it work.
So either you're fine with fleecing or you have the wrong pitch.
In response to "Dean" upthread from here, you are not presenting my own claims or my own words, but rather the twisted systerhood of my archnemesis who wrote a diatribe against me thirteen years ago. May I remind all here present that you are supposed to be "scientists" who reserve judgement and who deal in facts, not ad-hominem attacks.
In response to "Wow" directly upthread, there is no fleecing of anybody going on here, because the Mentifex AI project is motivated not by money but by curiosity. Now let's try something with the markup:
![Theory of Mind](http://ai.neocities.org/VisRecog.gif)
Sorry, your spiel indicates that you are either incompetent in the field or unreliable. If you're not trying to fleece anyone, you're wasting your time because at least if you WERE trying to fleece someone your pitch could have been improved and then maybe believed by investors smart enough to actually have a chance of succeeding.
But if you're not trying to fleece anyone then your spiel is clearly the best you've got and the venture is a waste of everyone's time, including yours.
It doesn't matter what your nemesis said or what dean said, your own words condemn the venture.
",In response to “Dean” upthread from here, you are not presenting my own claims or my own words, but rather the twisted systerhood of my archnemesis"
Sorry, that accurately represents the bullshit you were spreading when I first read your crap. It doesn't seem that a single thing has changed with your ignorant presentations.
Science may go terribly wrong with artificial intelligence, or perhaps AI will benefit humanity. This morning I have created a new web-page that I would like to share here while we are on the topic of Science Going Wrong. What is the formatting here for links? Please excuse any ignorance shown below:
So you're just here to advertise, then.
Mentifex, what I saw on your website might be suitable as an outline you make for yourself when writing a science fiction story, but it's not a theory of mind.
Wow, why would AI be installed on a toaster? AI, to the degree that it exists or will exist, would be used to make decisions, either regarding hardware or software. So, we see early AI helping to fly fighter jets, buy and sell stocks, operate remote drones like the Mars probes and military robots, that sort of thing. I can certainly imagine software capable of learning making disconcerting decisions with undesirable results.
Might as well ask why would you install it on a nuclear missile.
The point of the toaster was to show how the means by which an AI can mechanically affect the world limits its ability to be problematic.
But if you want to pretend to be serious, how about having AI do REAL browning control rather than a thermistor timer?
If you don't like toasters, then what about your TV? So it can direct your advertising to the right place. And how will it kill you? Turn to MTV and swap channels until you get an epileptic fit?
The point was also not about toaster AI but how bullshit the claim "AI is dangerous!!!" is, because if an AI were put inside a toaster, the claim it's dangerous is plainly ridiculous.
Maybe you just don't want to comprehend it, though. Maybe you can't.
Wow - So you're confident that the military robots the US has announced will never wield a weapon will actually never be armed? How about North Korean, Russian, or private robots? I can't help but notice that you ignored my specific examples. It's not as though we have no historical examples of unanticipated consequences or decision makers ignoring warnings. While Mars probes behaving badly would be unlikely to lead to Skynet, I can see, for instance, stock market programs capable of learning making very costly decisions which still do not rise to the level of Robot Apocalypse®.
I admit I am currently less worried about AI than I am global warming, which is not a case of science gone wrong so much as science ignored.
So kermit, you don't know what confident looks like? Because there's bugger all that could indicate that in my post. So, yeah, it is the second option: you can't comprehend it, the situation is too hard for your poor little brain.
I'll start you like I do the more odious kids with education problems.
OK, so what do you think is the dangerous part about AI? Not guns, not weapons, AI.
"I resent the implication that I'm a one-dimensional, bread-obsessed electrical appliance."
For some reason, people refuse to heed the lessons of Red Dwarf. Clearly toasters are disruptive.
"I toast, therefore I am. "
@15 AI will make unexpected decisions. If those decisions are bad ones by our standards, there could be bad results. I am not worried that my toaster would make wrong decisions for me about how brown to make my toast; I am worried that AI which runs stock purchasing, military robots, medical programs, etc. could make wrong decisions. And by "wrong" I mean a decision with which most or all humans would be unhappy.
An abstract AI sitting alone in a thought experiment will not hurt anything, of course. But they are and will be attached to weapons, and our bank accounts, and transportation. I think AI connected to the internet will be the most dangerous, with the unpredictability increasing by orders of magnitude.
You claim that your post does not indicate confidence, yet you said "how bullshit the claim 'AI is dangerous!!!' is, because if an AI were put inside a toaster, the claim it’s dangerous is plainly ridiculous." I inferred that you were confident of safe military decisions etc. because I can't make heads or tails of your statement otherwise. I am a bear of very little brain, however, and perhaps I misinterpreted you. But AI isn't safe because it can be put into toasters; it's dangerous because it won't *only* be put into toasters.
Guns, unloaded, and not used, are safe. But that's not their only circumstance, is it? I can see danger in genetic engineering, AI, cars, and the use of fire, without wanting these things to be banned, forbidden, or even discouraged. Any technology is safe if it's not used. Generally, tech is developed because it is already used or somebody anticipates it being used.
"@15 AI will make unexpected decisions. "
And why is that a danger?
No, again, not what's dangerous about guns or the materiel of war: AI.
"If those decisions are bad ones by our standards, there could be bad results"
Uh, stock market crash?
"You claim that your post does not indicate confidence"
Yup. Remember you said I was confident that the future would have no future AI robots/terminators. The post said nothing about robots, only about AI.
What's dangerous about AI?
The only valid one so far is null, both because the cockups are done by humans (so it's not an AI thing) and because it requires us to decide on an absolute right and proper answer. HFTs show that there's a shitload of argument about what is right and proper with the stock market, so you get good or bad not based on AI but on the moral codes of the person making the assessment.
What is dangerous about AI?
"Guns, unloaded, and not used, are safe"
No, they can be loaded. Or sold or stolen to be used in a crime.
#16 I forgot the RD toaster. Thanks for refreshing my memory.
I thought of the drink dispenser in Douglas Adams' Restaurant at the End of the Universe.
“That drink,” said the machine sweetly, “was individually tailored to meet your personal requirements for nutrition and pleasure.”
“Ah,” said Arthur, “so I’m a masochist on a diet am I?”
“Share and Enjoy.”
“Oh, shut up.”
"Which leads me to the inescapable conclusion that Cylons are, in the final analysis, little more than toasters... with great-looking legs."
~ Baltar, Battlestar Galactica
Terror at the breakfast table:
There is also the Futurama episode "Mom's Day", set in a world where they build robots that are insane or perverted or hobos.
Welp. I guess I can't condemn AI as a concept which is not being applied in any way, so I'll have to let it go.
I see myself trying to escape blame and censure for what I have created, while the villagers with their pitchforks and firebrands storm the castle where I am hiding out.
Don't create it, then. If it's already done, undo it. Don't pretend you've created something when you have not. Don't fuck up and THEN ask forgiveness.
Apply those as appropriate and cut out the fucking histrionics.
"Welp. I guess I can’t condemn AI as a concept which is not being applied in any way"
You haven't condemned AI at all, you've made strawmen to attack and argued about how weapons are bad. You can't condemn AI as a concept not because you are forbidden but because you cannot work out how, only have the desire to condemn it.
“Which leads me to the inescapable conclusion that Cylons are, in the final analysis, little more than toasters… with great-looking legs.”
Ew. Where do you put the bread???
"I see myself trying to escape blame and censure for what I have created, while the villagers with their pitchforks and firebrands storm the castle where I am hiding out."
You are confusing "storming" with "laughing hysterically", which people are already doing at the bull crap you're producing.
Wow, @2 you said "AI is entirely benign. What if your toaster had AI. What if it went bad. How would it go bad? "
I allowed that to distract me, and I derailed the thread; I apologize to all. There is a difference, of course, between bad science and science that can produce bad results. AI development is coming along nicely, in fits and starts and detours and a few wrong turns, but the little news of it I read indicates that it's doing well, and already has real results. Sentient robots and such, if possible, will be here in due course.
Both the means and likelihood of our end coming from AI are unpredictable, and a rather different matter altogether.
Can AI get bored? I know I'm getting bored waiting for it.
"I allowed that to distract me, and I derailed the thread, I apologize to all "
Ah, no probs.
"Both the means and likelihood of our end coming from AI are unpredictable"
The means will be via things we let AI do, not the fact of AI itself. And to some extent we may find that AI is no worse than HI (Human Intelligence) at fucking things up, or that HI fucking up an actual AI implementation could lead to problems. But both of those are seen without AI, just automation.
There doesn't appear to be anything at all dangerous about AI in and of itself. All the problems appear either as a conjunct with some task it is supplied to do or its equipment.
The two best known examples of "bad AI" were both because of human fuckups. HAL "lost it" because it can't decide NOT to obey (see Robocop II as well, but there the human overrides the programming) but was given conflicting goals. And Ash in Alien was likewise given an overriding task that caused the "best option" to achieve it to be "kill all humans".
They are all dependent on either the machinery being inherently weaponisable, or given full authority over safety critical systems.
If we were really worried about that, we could just not use AI in those roles.
Singularity here we come! I just spent six hours of Memorial Day 2017 coding the free open-source http://ai.neocities.org/ghost.txt artificial intelligence which Netizens may rename as ghost.pl and run immortally on their AI Lab computer. Watch the ghost in the machine think with concepts, then teach your fellow AI enthusiasts how to run the ghost AI so as to learn How the Mind Works.
“Guns, unloaded, and not used, are safe”
Here's some sci-fi. Imagine, if you can, an app thingie for your phone. It's called a death app. It works by holding your phone in your hand pointed at someone like a TV remote, pressing a button on the phone, and a little laser thingie zaps the targeted person and kills 'em. Neat app, eh!

Rather EXACTLY like guns are now, actually! There's no issue with people carrying death apps around the place, able to kill anyone in a 20 metre radius, is there?
Fucking dumb yanks with their cowboy scifi crap.
The point that is missed with the "If you ban guns, only criminals have guns" is that the criminals may be the only ones with guns, but they have fewer guns and don't need to carry them as a matter of course unless they're expecting to meet armed resistance (such as armed police or other armed criminals).
And anyone claiming "The only thing that defends you from a bad guy with a gun is a good guy with a gun" needs to watch Pulp Fiction, where two armed criminals hold up a diner and are stopped by two bad men (criminals, hit men) with guns.
So if your mantra were valid as an argument for keeping guns, you should also allow criminals to keep guns. Stop disarming criminals, it only makes you less safe.
So you went from wondering if you can get help to write AI to releasing AI in, what, two weeks???
No, apparently you've been doing this shit for thirty years:
Wow - "criminals may be the only ones with guns, but they have fewer guns and don’t need to carry them as a matter of course unless they’re expecting to meet armed resistance (such as armed police or other armed criminals)"
So your "logic" is that criminals don't need to carry guns unless they are dumb enough to expect resistance from law enforcement or other criminals who feel they have to carry guns because they are also dumb enough to expect resistance from law enforcement.
Yet, if you are unarmed, you should feel at peace that you will most likely be robbed or assaulted by a smart unarmed criminal, since he is smart enough to know you are unarmed.
Oh, and if you are still confused, I recommend watching certain scenes in Hollywood movies that I have selected because they fit my narrative.
Yes, my name is Wow and my opinions are based on fictional movies, particularly ones that have the word "fiction" in the title.
Now, back to reality...
Wow, you are quite possibly the dumbest person I have ever come across.
Now, please go and continue to collect those dividends from that profiting co-op you consider to be a criminal enterprise...
So you're dumb enough to think that criminals will carry guns, which would make them criminals even if they did nothing else, merely because they're criminals?
No, not even failed criminals are as dumb as you, betty.
Sadly, this thread has been hijacked by a subject that cannot be resolved. Before I abandon that, let me point out that high-powered machines for killing people at a distance cause death and injury, in flesh and blood people. The more there are, the more that happens. Death is death. Thankfully, so far the people who are recently crowding my inbox with efforts to persuade me to arm myself before they outlaw weapons (again, under Trump and a Republican Congress? the paranoid will believe anything, and of course life and liberty are less important than killing machines) have not yet made my life a place where I need a gun, hidden or otherwise, let alone one that includes overkill in its capabilities. But it's clear that marketing, not safety, is their concern.
Obviously, I took the bait, so will save my "real" comment for a new frame.
Distraction 2: I did a lot of looking around on DDT. It's an interesting point that the arguments for rolling back DDT restrictions can be based on politics and obviously dishonest, but if one looks a bit further there is something to be discussed. One of the biggest problems is the undoubted fact that insects evolve faster than humans, and DDT kills other predators such as birds along with its harmful effects on humans. There are alternatives and as far as I was able to discern, they are better than DDT. I'm not going to look it up again, and perhaps I misremember, but it seems to me I read that the ban on DDT in critical areas is not total.
When it comes to us humans not reaching for the first chemical overkills we can find, regardless of side effects such as unintended consequences and the insect (and other disease vector) ability to evolve to resist the poisons, it is not a good sign for our future.
Panic does not seem to be a good guide for action, but expecting most people to move past the visceral to the thoughtful appears to be too much to ask.
Susan, about the time DDT was being heavily rolled back there was good evidence that the mosquitoes were gaining tolerance of it. It is possible that if it is re-used in areas where it has not been used in a while, that tolerance will go away temporarily.
The alternatives may be better or worse, I'm not sure, but there are two things that I think we know: 1) the alternatives do not have the long lasting persistence in the environment (that's good) and 2) the alternatives are way more expensive (that is probably bad).
The main reason, beyond what I discussed above, that malaria is hard to deal with is that as a species we don't want to spend any real money on those particular children being saved from dying from this particular disease.
DDT is still allowed as a specific, though, and this is what it was meant to be used for, what it was licensed as.
But protection against malaria is more effective and cheaper done by putting netting up on the beds. That doesn't stop it being a problem when working out in the fields, but it is very effective protection at a time when you have no other defences, and it stresses the mosquito population by denying it an easy food source for a long stretch of time.
I think that last sentence is part of what I was reaching for. Instead of a statement, how about a question: how can people discover the joy of knowledge and the usefulness of admiring hard work and intelligence in others and in themselves?
I remember like it was yesterday the day I finally admitted I didn't know and started the hard work of not looking for a secret or a key but learning my craft by looking hard and telling the truth (in my case, drawing). It wasn't easy, but it was probably the best day of my life.
Most people don't like to acknowledge that the mental clothing they wear can be armor against learning. It's hard to admit one doesn't know. It's hard to give up on secrets and magic thinking.
Anyone who wants to know how bad it can get might read John Brunner's The Sheep Look Up (1972).
Wow: Requires a bed. Oh, and a net.
But, uh, you can buy beds. And nets. Just like you can buy DDT or any insecticide.
But unlike insecticides,you can make a bed and net. They're cheaper too.
Know what you can also do? Impregnate the net with insecticides. Then when the mozzies come along they die trying to get to your sweet sweet blood.
This, apparently, is a really good way to kill mozzies. Instead of spreading enough to kill a trillion mozzies over an acre, you do enough to kill hundreds where they will go in their scores.
What about some sort of mosquito bait in a trap like a fly trap?
That's pretty much what an anti-moz bed net is, with the human as the bait.
Surely some sort of chemical super bait trap away from the camp/dwelling a bit, in conjunction with nets, would help.
The problem with that is that the human is still open to being bitten. You could cover the animal pen, but we don't catch malaria from eating cows or goats, so the only benefit is healthier livestock. It would be done in addition to anti-moz impregnated nets.
Where there's a noted breeding place, DDT can still be added there as a spot insecticide. At the larval stage, though, there are much better options, including adding dragonfly larvae, if it's moist enough to allow them to survive.
Though I'm mostly at the limit now of what I know, since this is all from conversations with someone who works in Africa and Asia rather than me working there myself. So I couldn't give an assessment of the chemicals or predator species that could be used in what area.
Wow - "So you’re dumb enough to think that criminals will carry guns, which would make them criminals even if they did nothing else, merely because they’re criminals?"
Shorter Wow - "Someone planning on robbing someone would never carry a gun for fear of breaking the law"
Can't make it up.
Didn't stop you, though.
But I guess you're pissed off that you got sacked from your council job cutting trees in the road and have no other skill than "can climb a ladder without falling off all the time" and "can use a saw as long as it doesn't really matter if it's straight or precisely placed".
Now you're putting your imagination in quotes...
Again, can't make it up.
Good stuff. Thanks.
A problem with large-scale off-site traps is the behavior of the mosquitoes. They tend to act on a very local basis, and they know a human from a non-human using a lot of different cues. It is very hard to attract them away from humans.
Right, the net that kills the insect surrounding the human, while possibly a hazard for the human, works as bait.
I cannot verify if this is for real, but people in Tanzania and South Africa, AFAIK, swear by it: a tall ceiling with a constantly running ceiling fan will attract and then either occupy or kill mosquitoes.
The bit I know about that is the better theory: the ceiling fan works by fiddling about with the air currents, causing the mozzies both to be drawn up there AND to be lured by the pheromones and evaporated oils drawn up there too.
I have no idea whether it's valid or not, but I thought it an interesting hypothesis.
I never got malaria while sleeping under a ceiling fan.
Where did you get malaria then?!?!?!
English is such a silly language. Practically begs to be used in a sitcom, though: so many ways to get the wrong meaning!