Thursday, December 30, 2010

Academic Resolutions

I don't normally make New Year's resolutions, and I don't plan on making any explicitly this year, but the coming of a new year feels like a good time to reflect on what we can do better.

So in the spirit of the tradition, I'll list some things many of us researchers (especially computer scientists) could work on in the coming year.
  • Finish writing papers at least a week before the (conference) deadline. That way, you can spend the last week improving the writing and rechecking the results.
  • Write up results immediately.  This will make writing papers much easier, and you can make sure your work is correct before moving on to the next thing.
  • Do reviewing in advance. We often have months to do our reviews but end up leaving them for the last week. Even just reading a paper long before a review is due lets the ideas sink in a lot better.
  • Make journal versions (especially of conference papers that don't include full proofs). Until you've given a complete proof, people are justified in questioning your theorems.
  • Read more. It never hurts.
  • Enjoy work but also leave time to enjoy life.
Feel free, of course, to add to the list in the comments.

And have a happy New Year!

If you are serious about making resolutions and following through on them, check out Beeminder. They seem to have good ideas on how to make your future self behave.

Monday, December 13, 2010

My Thoughts on NIPS 2010

This year, I attended my first NIPS conference -- NIPS stands for "Neural Information Processing Systems," but it has become one of the main venues for machine learning research in general. I've attended lots of machine learning conferences before, but somehow never managed to go to NIPS. This year, I had run out of excuses.

NIPS is broader than ICML, the other general machine learning conference, and it is certainly much broader than COLT and ALT, the learning theory conferences. I thought this would be a bad thing, as I expected that I wouldn't understand many of the talks and posters, but it turned out to be one of my favorite (if not my favorite) conference experiences.

First, because most people at NIPS are in the position of not being familiar with everything, there wasn't the expectation that I would immediately know the model or the problem being tackled when I talked to someone. This strangely made things more, not less, accessible. Also, because lots of topics were outside my areas of research interests, I didn't feel internal pressure to attend many of the talks, and it allowed me to relax and have more time to talk to other researchers. And because NIPS draws a huge attendance, I got to reconnect with many people I hadn't seen for a while, and I had a chance to meet people whom I know only through their research or through correspondence.

On top of all that, within my areas of research, there were quite a few really good contributions, and I wanted to point out some of the papers that I found especially interesting. This is, of course, a list biased not only toward my interests, but also toward the papers whose presentations I happened to catch.
  • A Theory of Multiclass Boosting by I. Mukherjee, R.E. Schapire. This paper characterizes necessary and sufficient conditions for multiclass boosting and gives a new multiclass algorithm not based on reductions to the binary case. A nice and elegant paper, it won one of the best student paper awards.
  • Online Learning: Random Averages, Combinatorial Parameters, and Learnability by A. Rakhlin, K. Sridharan, A. Tewari. Building on the work of [BPS '09], this paper extends various notions of complexity to the online setting (Rademacher complexity, covering numbers, fat-shattering dimension) and proves general online learnability guarantees based on these notions. This is similar in flavor to already-known general offline guarantees.
  • Trading off Mistakes and Don’t-Know Predictions by A. Sayedi, M. Zadimoghaddam, A. Blum. This paper analyzes a model where the learner is allowed a limited number of prediction mistakes, but is also allowed to answer "I don't know," generalizing the KWIK model [LLW '08]. They prove some nice trade-offs, and there seem to be many potential interesting extensions.
  • Learning from Logged Implicit Exploration Data by A. Strehl, J. Langford, L. Li, S. Kakade. This paper discusses how to analyze the performance of a new contextual bandit algorithm by running it on previously recorded data. It's pretty intuitive that this should be possible, and this paper goes through the math carefully. This is a problem confronted by many content-serving companies, and I imagine this analysis will be quite useful.
Additionally, I also enjoyed David Parkes's talk on the interactions between machine learning and mechanism design -- this seems to be a very interesting area of research.

Finally, I also attended the NIPS workshops, which were rumored to be quite good, and they certainly lived up to the rumors. I especially enjoyed the workshop on Computational Social Science and Wisdom of the Crowds -- I decided to attend a workshop in an area about which I know very little, and all the talks were really good. In fact, from Yiling Chen's and Jake Abernethy's talks, I even learned about connections between prediction markets and online learning, so this workshop ended up being more closely related to my research than I expected.

Clearly, I've been missing out by not having attended the previous NIPS meetings. I'm planning to make up for it in future years.

Tuesday, November 30, 2010

To Boldly Research

In science, at least in computer science, in order for your research to have impact, you roughly have to do one of two things.
  1. Solve a known open problem that people care about.
  2. Solve a problem you made up and convince others that it’s interesting / important / impactful.
Often, a paper will do a combination of the two. Of course these aren’t the only ways to have impact – you might find a shorter proof of a known theorem, find a connection between fields others haven’t, etc. But discovering something interesting that nobody else has is a common theme.

So research requires some confidence, if not arrogance, to attempt. Successful researchers clearly need to find this confidence, and it’s not something that comes immediately. So I thought it might be helpful, especially to beginning graduate students (assuming any are reading), for me to spell out why I think succeeding in research is not as hopeless as it might first seem.

Lots of people reason along the following chain, especially regarding solving known open problems: 1) Famous researcher X attempted this problem and didn't get anywhere. 2) They're clearly better / more experienced than I am. 3) Hence, I won't be able to solve it.

The worst part of thinking this way is that it’s self-fulfilling – if you never make a serious attempt at a problem believing you can solve it, you probably never will. The second worst part is that this reasoning is flawed.

Let's assume that experienced researcher X attempted (but failed to solve) the problem you're working on; that doesn't mean you can't solve it. There are a couple reasons for this.
  1. Famous researcher X is probably working on lots of problems and doesn’t have nearly as much time to devote to your problem as you do. By seriously applying yourself, you might succeed where X hasn’t.
  2. The process of doing research is partly random. You might just have an idea that X hadn’t.
  3. Some advances may have come out since X seriously attempted your problem. The average graduate student can solve lots of problems Gauss couldn’t solve, and it’s because science has made progress since then. This type of reasoning holds true on much smaller time scales.
There are also things you can do to increase the chances of being successful, or at least I’ve found that these things have helped me.
  1. Work on a problem that you’re actually excited about. This will make it much easier to put in the work needed to be successful. If research feels more like work than fun, then you might be doing it wrong.
  2. Find a problem that you think you have a chance of tackling. You should be able to feel in your gut whether the problem “feels right.” Many problems are indeed too hard for beginning researchers to attempt.
  3. Don’t forget you have access to experienced person Y (your advisor) who might give you some insight that experienced person X didn’t have.
Of course I am also still a relatively young researcher, and getting research confidence is an ongoing process -- I still find working on hard open problems (rightfully) daunting. But if you can fool yourself into thinking you can succeed, even where others have failed, you might just turn out to be right.

Thursday, October 21, 2010

A Study in Purple

Yahoo! Research has a lab in New York City, right near Times Square. It is where I spent last year as a "computing innovation fellow," and as promised, I want to blog about my experience.

Yahoo!'s New York lab isn't large. It has about 15 scientists, whose research spans several areas of computer science — machine learning, algorithms, economics and computation, storage systems, and computational social science. Most of the scientists are permanent researchers, but there also are a few postdocs. The lab often had talks, visitors, and interns to keep the atmosphere lively and interesting.

I mainly worked with John Langford on machine learning problems in computational advertising. A particular problem most search engines need to solve is what advertisements to show to their users, given some contextual information like a user's IP address, shared browser settings, and search query. The goal is to show users ads that they are likely to click, in order for the user to have a good experience and for the search engine to make a profit. However, how to do that is not clear because whenever an ad is shown, no explicit information is received about how well a different ad would have performed in the same situation. Effective algorithms need to balance exploiting strategies they have already learned to be good and exploring new strategies, possibly at some cost. In my time at Yahoo! we made good progress in understanding what sort of guarantees are possible for this problem, which is called the "contextual bandit problem" in the machine learning literature.
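
To give a flavor of that exploration/exploitation tension, here is a toy epsilon-greedy sketch in Python -- an illustration only, and emphatically not the algorithms we actually worked on (the class and all of its names are made up):

```python
import random
from collections import defaultdict

class EpsilonGreedyAdServer:
    """Toy contextual bandit: epsilon-greedy over per-context click rates."""

    def __init__(self, ads, epsilon=0.1):
        self.ads = list(ads)
        self.epsilon = epsilon
        self.clicks = defaultdict(int)       # (context, ad) -> clicks seen
        self.impressions = defaultdict(int)  # (context, ad) -> times shown

    def _rate(self, context, ad):
        n = self.impressions[(context, ad)]
        # Prefer ads we've never tried in this context.
        return self.clicks[(context, ad)] / n if n else float("inf")

    def choose_ad(self, context):
        # Explore with probability epsilon; otherwise exploit the
        # empirically best ad for this context so far.
        if random.random() < self.epsilon:
            return random.choice(self.ads)
        return max(self.ads, key=lambda ad: self._rate(context, ad))

    def record(self, context, ad, clicked):
        # We only ever learn about the ad we actually showed -- the crux
        # of the contextual bandit problem.
        self.impressions[(context, ad)] += 1
        self.clicks[(context, ad)] += clicked

server = EpsilonGreedyAdServer(ads=["sneakers", "cameras", "flights"])
ad = server.choose_ad(context="query:running shoes")
server.record("query:running shoes", ad, clicked=True)
```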

It was quite exciting to get to work on real-world internet scale machine learning problems, and have the results of my work have practical consequences. I was also very lucky to have a chance to work with John, who, in addition to being one of the foremost experts on this problem (and in machine learning in general), was a kind and patient host, from whom I learned a lot. I also got to work with Rob Schapire (my undergraduate mentor and collaborator thereafter), who was visiting the lab on his sabbatical from Princeton.

Overall, I was impressed by the flexibility scientists at Yahoo! have in choosing their problems, in having significant time to do research, and in having access to large amounts of real-world data at scales available at only a few other places. We computer scientists are quite lucky to have industrial research jobs as a serious option for research careers.

Now, I have started on a new adventure as a postdoc at Georgia Tech's Algorithms and Randomness Center. I have never been at such a big and "happening" department, and I am excited to get to work with many great researchers on interesting and important problems.

Sunday, September 12, 2010

Secret Numbers

I recently moved from New York to Atlanta to take a postdoc at Georgia Tech's Algorithms and Randomness Center. Georgia Tech is a great place to do computer science research, and I am excited to have started working there. I hope to soon blog about ARC and also about my time at Yahoo! Research, but not in this post -- this post is about something I have to do every time I move or get a new job: give lots of people my "secret" numbers.

In 1936, when the Social Security Administration arose from the New Deal, Social Security Numbers (SSNs) were assigned to people to keep track of their accounts. By 1986, the government started using these numbers for tax purposes. And today, Social Security Numbers have basically become national identification numbers.

It's a number we're supposed to keep secret because with it, others can steal our identity and unearth our private information. It is also a number that we're supposed to put in clear-text on tax forms, car purchases, credit card papers, job materials, apartment applications, college applications, doctor's visits, pet adoption forms, appliance rental paperwork, etc. Soon, I feel like I'll need to give my SSN to train conductors and clerks at the supermarket. Yes, I know I'm technically allowed not to give out my SSN, but this may leave a potential employer or apartment landlord unhappy with me. Being left out of a job or house is possibly worse than giving the nth person my SSN.

Unfortunately, Social Security Numbers are not the only examples of this sort of thing. The same problem arises with credit card numbers, routing and account numbers on checks, and even numbers on college ID cards. I only pick on the Social Security system because it's the most glaring example of this phenomenon, and possibly the most serious.

The most frustrating thing about this situation is that there is no need for our system to work this way. For example, we could use public key cryptography, where each person has a public and private key (with the public key verifiable by the government) -- people could sign whatever forms they needed to with their private keys without compromising their integrity. Or if that's too hard, we could have the government generate one-time-use checkable identification numbers that we could give out to untrusted sources. Or we could have two sets of numbers: one that we use for taxes/credit and another that's not secret, but checkable in some database. Or we could do away with this whole national ID thing altogether. I'm sure there are many better solutions.
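
To make the first suggestion concrete, here is a minimal sketch of the signing idea, assuming the third-party Python cryptography package; it is an illustration of the concept, not a design for a real identification system:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()  # stays with the citizen
public_key = private_key.public_key()               # registered publicly

application = b"apartment application, 2010-09-12, J. Doe"
signature = private_key.sign(application)

# The landlord checks the signature against the public record; no secret
# number ever changes hands. verify() raises InvalidSignature on a forgery.
public_key.verify(signature, application)
```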

But that's not how bureaucracy works, and sometimes the short-term cost of switching systems is too high for politicians to do anything about it, even if switching would be better in the longer term. For now, I just have to hope that the dozens of people I shared my secrets with this last week won't abuse our broken system.

This document has some more information about the security of your SSN.

Monday, August 16, 2010

An Awful Waste of Space?

It's been 50 years since Frank Drake (of the famous Drake equation) started project Ozma -- humanity's first search for signals from alien intelligent life. I thought it might be fun to post on this topic, even though I have absolutely no expertise in it.

In 1950, ten years prior to project Ozma, Enrico Fermi posed a question that might have inspired Drake: if intelligent aliens exist, why haven't we found them (or they us) yet? This question is actually worth thinking about, for the following reasons:
  1. Earth is probably not the only planet in the entire universe on which intelligent life evolved. It's likely that (the building blocks of) life can form in many diverse conditions. There's probably even more evidence for these claims now than there was in 1950.
  2. Once there's life, evolution should take care of producing intelligent life (at least in some cases).
  3. Intelligent life would probably start to explore the universe and expand at, perhaps, an exponential rate (hey, the universe is pretty big).
  4. With so much intelligent life going all about the universe, you'd think they would have run into us by now! (Or at least sent us some messages.)
But of course, despite looking, we haven't yet found anything.  So, why not?

Here's a list of all the (remotely plausible) reasons I can think of, from least to most likely. I realize the events are not all mutually exclusive.
  • [very unlikely] Intelligent life is all over the place. But once aliens invent true virtual reality or something else that really floats their boat (and they always do), they have no good reason to go exploring the universe.
  • [very unlikely] Intelligent life is everywhere, and different aliens often run into each other. However, whenever this occurs, the more advanced aliens wipe out the less advanced ones. By the anthropic principle, we'll have to wait to be the more advanced ones.
  • [unlikely] Intelligent life has already found us on Earth but we don't know it. Either it is successfully hiding from us (more likely) or the government is successfully hiding its signals from us (much less likely).
  • [unlikely] Intelligent life occasionally appears, but always (or usually) manages to wipe itself out with the weapons it invents before getting the chance to meet us humans.
  • [possible] We are alone and very special. Intelligent life has not appeared anywhere else. The reason we are even around to ask this question is the anthropic principle, God's will, or plain old extreme luck.
  • [reasonable chance] There is life all over the universe, but either it doesn't expand exponentially or the distances are just too great to cover, both for intelligent beings and for their signals.
  • [reasonable chance (most likely)] There are intelligent beings emitting signals that reach us from afar, but using methods (or languages) we haven't thought of. Eventually, we'll figure it out and our world will change forever.
I'm sure I missed something, so I welcome your comments.

The image of the Very Large Array at Socorro, New Mexico, United States is under a Creative Commons Attribution-Share Alike 2.0 Generic license. Its author is here.

After writing this post, I have found a similar list on Wikipedia's article on the Fermi Paradox.

Sunday, July 25, 2010

Pay it Backward

I just finished writing peer reviews for an academic conference. The peer review system is the established method of quality control for scientific publications. When work is submitted for publication to an academic journal, experts judge it for correctness, importance, clarity, etc. It is then published or rejected. In computer science, conference publications, which are at least as important as journal publications, also go through the review process.

The question for this post is: how much peer reviewing ought one (researcher) do?

One easy answer is that you can't do more than you're asked to do. So if you're not being asked to review papers, no need to feel guilty! But when can you start to feel like you're doing too much reviewing? How much is the ideal amount?

The average paper gets reviewed by about 4 people, so it's not hard to compute your "fair share" of reviewing. For each paper you submit, whether it is ultimately accepted or not, add to a running total: 4 divided by the number of coauthors on the paper. This sum, perhaps rounded up and, say, tallied yearly, would be your fair share.
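
In code, the bookkeeping is trivial -- a tiny sketch, where the helper function is hypothetical and the 4-reviews-per-paper figure is the estimate above:

```python
import math

def fair_share(coauthor_counts, reviews_per_paper=4):
    """Yearly reviewing load implied by one's own submissions.

    coauthor_counts: one entry per paper submitted this year, giving the
    number of authors on that paper (yourself included).
    """
    return math.ceil(sum(reviews_per_paper / k for k in coauthor_counts))

# Two papers this year: one solo, one with three authors total.
print(fair_share([1, 3]))  # ceil(4/1 + 4/3) = ceil(5.33) = 6 reviews
```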

But just like taxes are progressive, so is the reviewing load. We cannot expect someone submitting his first paper to review 4 papers in payment. This person isn't yet known in the community and won't be asked to peer review, and either way he wouldn't yet be considered an "expert" in the field. To make up for this, more senior people have to do more reviewing -- it's only fair. Every established scientist had to submit her first paper without having "paid" for it upfront.

But the situation is even more lopsided. Scientists need to make up for more than the reviewing burden they placed on others when they were getting started. Research is very bottom heavy; for instance, lots of graduate students leave research right after (or even before) finishing their Ph.D.s. These graduate students (at least in computer science) usually submit some publications but don't review nearly their "fair share." So those who remain in research need to compensate, and they should -- research is their game.

Of course, we have to take into account that very senior people do other forms of service that the rest of us benefit from, like chairing conferences, editing journals, and serving on committees. This should perhaps reduce their reviewing load.

My feeling is that in computer science, a good time for the amount of reviewing you do to catch up with the amount of reviewing you ask of others is when you're a postdoc. You've stayed past graduate school, and you've become an expert in something -- time to pay your dues.

This year I've done at least as much reviewing as I've "used," but that's how it should be. It's not always the most pleasant work, but I realize it needs to be done. And if I'm lucky enough to stay in research, I expect my reviewing load to continue to increase.

Not the worst price to pay for having a job you love.

Tuesday, July 06, 2010

Staying Connected

When I visited Japan three years ago, I was stunned to see how many people poked at their phones as they rode the trains or walked along the streets. I had heard a lot about Japan's high-tech industry, and I thought this strange phenomenon was a local quirk limited to Japan, whose society is known for its attachment to technology.

In this respect, Japan was just ahead of the curve. Now, three years later, Manhattan is no different. During my daily commute, I spend considerable effort avoiding bumping into the many people staring down at their palms. I am even occasionally guilty of iWalking myself. Whenever we're bored, our phones offer an easy escape to another world of email, tweets, and blogs.

Of course, I realize I'm not pointing out anything new -- countless articles are written on this subject, many coming to different conclusions about what the new phenomena of information at our fingertips, constant connectivity, and faster computing mean for our society: we're getting stupider, we're becoming smarter, the singularity is near, etc. One of the more fun conclusions from this trend is that the reason we haven't met any aliens is because they're busy playing computer games.

I don't know where this technological progress will lead. Information technology allows science to advance at a faster and faster rate, forming a positive feedback loop. Being constantly connected is addictive, and I imagine as technology improves, this trend will get stronger and stronger. I see myriad benefits, but also some downsides. As we spend our minutes checking email and tweets, we leave fewer hours for things that are immediately less fun but ultimately more fulfilling -- like reading a long novel or even simply thinking deeply without interruption.

I got thinking about all this during my recent trip to Israel, where I didn't have the constant connectivity I am now used to. I had no iPhone reception, the internet connection in my hotel was spotty, and I even used physical maps to navigate while driving. And even though it was a bit frustrating, in many ways it was nice to be off the proverbial digital leash.

While I think that technology, including information technology, is a big net plus for society, there's also some real danger of us ending up like those imagined aliens. I don't know if we have any power to change the course this arrow is taking, but I want to stay connected to the real world, even as I inevitably become more connected to the virtual one.

So while I'm happy you're reading this post, maybe it's time for a walk -- without your phone.

Saturday, June 05, 2010

Fruitful Fractions

In my first week of college, John Conway gave a lecture on one of his beautiful inventions, fourteen fruitful fractions, to attract freshmen to become math majors. This post is about these fractions.

Here are the 14 fractions, in order: 17/91, 78/85, 19/51, 23/38, 29/33, 77/29, 95/23, 77/19, 1/17, 11/13, 13/11, 15/14, 15/2, 55/1

Here's what you do with them. Start with the number 2, and keep multiplying your number by the first fraction on this list that produces an integer. (Note this is always possible because the last "fraction" is just 55.) Every time you see a power of 2 produced by this procedure, write down the exponent.

For example, if you start with the number 2, you get the sequence: 15, 825, 725, 1925, 2275, 425, 390, 330, 290, 770, 910, 170, 156, 132, 116, 308, 364, 68, 4, 30, 225, 12375, 10875, 28875, 25375, 67375, 79625, 14875, 13650, 2550, 2340, 1980, 1740, 4620, 4060, 10780, 12740, 2380, 2184, 408, 152, 92, 380, 230, 950, 575, 2375, 9625, 11375, 2125, 1950, 1650, 1450, 3850, 4550, 850, 780, 660, 580, 1540, 1820, 340, 312, 264, 232, 616, 728, 136, 8, 60, ..., 928, 2464, 2912, 544, 32, ...

The first three powers of two that appear on the list are 4, 8, and 32; their corresponding exponents (of two) are 2, 3, and 5. If you keep going, these exponents generate the prime numbers, and I really mean all the prime numbers, and only the prime numbers, in order! This looks very much like magic.
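
If you'd rather not do all those multiplications by hand, here is a short Python sketch of the procedure (my own implementation of it, not Conway's):

```python
from fractions import Fraction

# Conway's fourteen fruitful fractions, in order.
FRACTIONS = [Fraction(*p) for p in [
    (17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29), (95, 23),
    (77, 19), (1, 17), (11, 13), (13, 11), (15, 14), (15, 2), (55, 1)]]

def conway_primes(count):
    """Return the first `count` primes via Conway's procedure."""
    n, primes = 2, []
    while len(primes) < count:
        # Multiply by the first fraction that yields an integer.
        n = next(int(n * f) for f in FRACTIONS if (n * f).denominator == 1)
        # Whenever n is a power of two, its exponent is the next prime.
        if n & (n - 1) == 0:
            primes.append(n.bit_length() - 1)
    return primes

print(conway_primes(5))  # [2, 3, 5, 7, 11] -- and it takes its time!
```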

There has been a long history of research into finding a formula for efficiently generating prime numbers -- this is beyond the scope of this post. But the fourteen fruitful fractions, fun as they are, in some sense aren't as exciting as one might hope. The problem with these fractions is that it takes a huge number of multiplications to get each additional prime, and it gets worse and worse the further you go. Suffice it to say, this is not an efficient method.

But isn't it surprising that such a list exists at all? Yes and no. Mathematicians found this surprising, and it is indeed very surprising until you look at it from the point of view of computer science. You see, this procedure is a prime-generating algorithm, and these fractions and the list they create correspond to a Turing machine (properly programmed for this task), with states, memory, and all. To a computer scientist, that this happens is very natural, and it has to do with the Church-Turing thesis.

Maybe these fractions could be better used to entice freshmen into computer science?

These fractions make an appearance in Conway's Book of Numbers.

Shorter fruitful sequences have since been found, but as far as I understand, finding the shortest such sequence remains open!

Wednesday, May 26, 2010

The Legacy of Martin Gardner

You may have heard that Martin Gardner passed away a couple days ago at the age of 95. Martin Gardner was perhaps the most famous creator of recreational mathematics. His books, columns, and articles made math fun and accessible to many.

Martin Gardner
photo from the Oberwolfach Photo Collection 

Martin Gardner was best known for his Mathematical Games column in Scientific American from 1956 to 1981. He stopped writing the column before I was even born, but others carried on his legacy. In the 1980s Douglas Hofstadter (the famous author of GEB) took over with his Metamagical Themas column, and afterwards Ian Stewart (a professional mathematician) continued the tradition with Mathematical Recreations.

Ian Stewart's wonderful column ran from 1990 to 2001, when I was just the right age to become hooked -- I probably read every one of Ian Stewart's articles after 1995. When I got to college, some of my friends pointed me to the earlier columns of Martin Gardner, and I quickly became a fan. It was not hard for me to see why Ron Graham said of him, "Martin has turned thousands of children into mathematicians and thousands of mathematicians into children." I'm sure that I owe some of my love for puzzles to Martin Gardner's legacy, and I was very sad to hear of his passing.

It would only be appropriate to end this post with a puzzle. Martin Gardner really liked this one and named it the "Impossible Puzzle."
Let x and y be two different integers. Both x and y are greater than 1, and their sum is at most 100. Sally is given only their sum, and Paul is given only their product. Sally and Paul are honest and all this is commonly known to both of them.

The following conversation now takes place:
  • Paul: I do not know the two numbers.
  • Sally: I knew that already.
  • Paul: Now I know the two numbers.
  • Sally: Now I know them also.
What are these numbers?

No cheating! I will update this post with the solution in a couple days.

A good memorial and an obituary of Martin Gardner.

The writer of the current Scientific American puzzle column, Puzzling Adventures, is Dennis Shasha.


Update (5/30/10): The answer is that the two numbers are 4 and 13. I might have written up a solution if good ones (including Martin Gardner's) didn't appear here.
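
For the skeptical, here is a small brute-force Python sketch of the conversation (the predicate names are mine); it takes a few seconds to run and confirms that 4 and 13 form the unique answer:

```python
from itertools import combinations
from functools import lru_cache

# Candidate pairs: distinct integers, both greater than 1, sum at most 100.
PAIRS = [(x, y) for x, y in combinations(range(2, 99), 2) if x + y <= 100]

def by_product(p):
    return [q for q in PAIRS if q[0] * q[1] == p]

def by_sum(s):
    return [q for q in PAIRS if q[0] + q[1] == s]

@lru_cache(maxsize=None)
def paul_unsure(p):   # Paul: "I do not know the two numbers."
    return len(by_product(p)) > 1

@lru_cache(maxsize=None)
def sally_knew(s):    # Sally: "I knew that already."
    return all(paul_unsure(x * y) for x, y in by_sum(s))

@lru_cache(maxsize=None)
def paul_knows(p):    # Paul: "Now I know the two numbers."
    return sum(sally_knew(x + y) for x, y in by_product(p)) == 1

def sally_knows(s):   # Sally: "Now I know them also."
    return sum(paul_knows(x * y) for x, y in by_sum(s)) == 1

print([(x, y) for x, y in PAIRS
       if paul_unsure(x * y) and sally_knew(x + y)
       and paul_knows(x * y) and sally_knows(x + y)])  # [(4, 13)]
```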

Thursday, May 20, 2010

Fenno's Phenomenon

As November congressional elections approach and analysts fill the airwaves, I am reminded of a phenomenon called Fenno's Paradox. It goes like this: why is it that voters are usually dissatisfied with congress but keep re-electing their representatives at high rates?

I first heard of Fenno's Paradox as an undergraduate taking an elective course on congressional power. The professor tried to tackle the paradox by looking at the advantages of incumbency, the role of money in elections, the irrationality of voters, etc. But I never understood why this is a paradox at all.

Consider the following situation. Say each voter wants all federal spending to go to his or her own state and votes for representatives who feel the same. Each representative fights to bring all spending to his or her state, and this results in a compromise that the money is split among the states. The voters in every state are furious at the end result, and they blame congress for wasting their money. But all appreciate their respective representatives' valiant efforts.

This may not even be so far from what really happens. I haven't studied this carefully, but most people seem to be against protectionism and special deals, except when these deals favor their own states. Representatives (without national ambitions) are answerable only to their own constituents, so they have an incentive to keep pushing for these deals, and in the end everyone is disgusted with congress.

The problem is that whenever I mention this to political scientists, they aren't convinced but don't really tell me why. And clearly I'm not the first person who thought of this "resolution." Perhaps in the social sciences coming up with a non-paradoxical interpretation doesn't resolve a paradox, or a paradox may just mean a seemingly contradictory statement.

Admittedly, I haven't read the literature on this phenomenon, so am I missing something? Perhaps the real resolution to this paradox is that in the future I should stick to posting only on things I know something about.

Monday, May 17, 2010

Lessons from Future Past

After reading Isaac Asimov's 1950 novel Pebble in the Sky, I started thinking about what visions of the future people had around 50 years ago. While reading the book, I was particularly struck by a mundane passage where one character (Arbin) waits for his turn at the newspaper. This passage wouldn't have been jarring to me had the book's setting not been thousands of years in Earth's future, where spaceships routinely traversed the galaxy. Asimov (at least in this book) imagined people passing a newspaper around in an age of interstellar travel. This reminded me of Captain Kirk signing notepads brought to him by his crew or Princess Leia hiding messages in droids. Just send an email!

But who can blame people for imagining the future this way? The 50s and 60s followed an exciting time in physics -- in the preceding half-century, we had gone from searching for the ever-unfindable aether to discovering relativity and quantum mechanics, inventing televisions and the atomic bomb, and much more. Fusion power providing unlimited free energy was supposed to be just around the corner. Meanwhile, computer science was still in its early stages -- we sent a man to the moon still doing some calculations with slide rules. What happened in the next half a century blindsided everyone.

To be fair, physics has made its own remarkable advances since then (in ways people imagined) -- in everything from incredible materials to new and interesting theoretical developments. But in the last half-century, the real action was in computing. Some visionaries did foresee the rise of computers. But instant and universal access to information? Secure virtual payments? Zettabyte scales? Nobody saw that coming! People envisioned a Golden Age for spaceships and jetpacks, yet got one in computing first.

That some of our predictions would turn out wrong is of course expected, but I still can't help wondering what the next 50 years will bring. There's a near consensus that these advances will continue, that we'll spend more and more time online and computers will do more and more for us. And it's hard for me not to believe that this will help usher in Golden Ages in other fields -- in medicine, now that we can quickly sequence genomes, run studies at unprecedented scales, and take advantage of nanotechnology; in math, as computers become more useful and we collaborate in new ways to solve open problems; even in the social sciences, as researchers get data they couldn't have dreamed of. Perhaps computers will even start to develop dreams of their own.

And while I think continued breakthroughs in computer science await (How can I not? I'm a computer scientist!), it's useful to remember that our predictions have never been perfect. Who knows where the next exciting advance will actually lie -- it might even be fusion reactors.

Friday, May 14, 2010

Caveat Surfer

A recent article got me thinking about laws and the internet. A couple days ago, the Latvian police apprehended a researcher at the University of Latvia who they claim hacked into government systems and obtained tax documents of various officials. Apparently, there was a security flaw that allowed anyone to access Latvian tax records by basically visiting the proper URL. So this alleged "hacker" probably wrote a simple shell script to get 7.5 million tax documents and sent them to a journalist. He now faces a possible 10 years in jail for essentially executing an illegal command.

Now, we don't really know what he did, and I have little knowledge of Latvian law, but it got me thinking about what I'd ideally like the law to be. On one extreme, we have people who write viruses and purposely cause lots of damage; on the other extreme are people who steal wifi from the local coffee shop (even they can get arrested).

This Latvian story falls somewhere in between -- unlike the wifi "thief," the Latvian hacker probably should have known what he was doing could get him into trouble, but do we really want to live in a society where you can be sent to jail for visiting a website? It seems one problem is that visiting a website can include everything from buffer overflow attacks to illegal currency transfers. But this incident feels more like the Latvian government put sensitive information online and then decided to arrest anyone who accessed it.

I guess it's unavoidable that laws about the digital world, even more so than laws about face-to-face interactions, will have seemingly arbitrary lines between what's legal and what's not. But there should be some burden on people and governments to reasonably protect their own data -- and freedom for all of us to poke around a little.

Reddit has some interesting comments on this story.

For a blog on these types of issues by people who have thought about them more than I, visit Freedom to Tinker.

Tuesday, May 11, 2010

Around the Galaxy in 80 Years

Stephen Hawking recently wrote a fun article on time travel. One of Hawking's ideas is to build a giant spaceship and fill it with fuel. As the ship burned the fuel, it would go faster and faster, eventually reaching speeds near the speed of light. Hawking argued that we could use such a ship to travel to the future or to the edge of our Galaxy, perhaps in only 80 years.


Leaving aside my skepticism about our ability to do that sort of thing, a bigger question should be -- how would that even help? It seems impossible to get to the edge of our Galaxy in 80 years no matter how fast we go. The Milky Way is about 100,000 light years in diameter. Even going at the speed of light, crossing it would take about 100,000 years, so what could Hawking be talking about?

The answer is again relativity. In my previous post, I talked about how objects going near the speed of light experience relativistic effects. One of those effects is time dilation: clocks of moving objects go slower when viewed by (relatively) stationary observers. Another is Lorentz contraction: an observer will measure the length of a moving object as shorter in the direction of its relative motion.

So while we can't hope to go faster than the speed of light, we wouldn't really have to traverse thousands of light years either, at least not from our point of view. Because of Lorentz contraction, we would see the entire galaxy contract into a more manageable distance. So, in some sense, we can travel a light year in under a year. And if we then turn around and come home to Earth, we will also have traveled into the future, killing two birds with one giant fuel-filled spaceship.
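
Here is a back-of-the-envelope sketch of the numbers; the speed below is made up, chosen to land near Hawking's 80 years:

```python
from math import sqrt

def gamma(beta):
    """Lorentz factor for a speed of beta = v/c."""
    return 1.0 / sqrt(1.0 - beta**2)

# Made-up numbers: cross a 100,000 light-year galaxy at 99.99997% of c.
beta = 0.9999997
distance_ly = 100_000

earth_years = distance_ly / beta        # as measured by clocks on Earth
ship_years = earth_years / gamma(beta)  # proper time on board
# Equivalently, the ship sees the distance contracted to distance_ly / gamma.

print(round(gamma(beta)))  # ~1291
print(round(earth_years))  # ~100,000 years pass on Earth
print(round(ship_years))   # ~77 years pass on the ship
```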

This also answers the question from my previous post. From their point of view, muons halve only thrice between airplane height and the Earth's surface because to them it's 3000 feet, not 30000.

The Milky Way image is under a Creative Commons Attribution-Share Alike 2.5 Generic license. Its author is Digital Sky LLC.

Sunday, May 09, 2010

Saved by Relativity

Coming at you from space, flying close to the speed of light, about 1 muon passes through your head every second. Probably not the best thing for your health, but also not enough to really harm you.
image from nsf.gov / J. Yang
Muons happen to be very unstable: their half-life is near 1 millionth of a second (microsecond or μs). 100 muons become 50 in 1μs and 25 in another. They decay so fast that if every particle in the universe were a muon, within 1 millisecond (1000μs) they'd all be gone!

These muons fly toward Earth at near the speed of light, at about 1000 ft/μs. So, if every second 1 muon goes through your head on Earth's surface, calculations show (due to their decay) 2 should go through your head at 1000 feet, 4 at 2000 feet, and so on, up to 2^30 at 30000 feet -- that's over 1 billion muons, enough to kill you instantly!

But planes fly at 30000 feet all the time. So where did we go wrong?

To figure that out, we need some relativity. The special theory of relativity postulates that the laws of physics are the same in all inertial (not accelerating) reference frames and that the speed of light is always constant. Its consequences include:
  1. Moving clocks appear to go slower to stationary observers.
  2. An observer will measure the length of a moving object as shorter in the direction of its relative motion.
  3. E = mc^2 (which we don't need for this problem).
The first two of these consequences have a noticeable effect only at very high velocities and a strong effect at near the speed of light.

Remembering that muons travel fast enough to experience relativistic effects, it becomes clear what's going on. From our point of view they decay (what turns out to be) 10x slower. So instead of 30 halvings, we only see them go through 3. At 30000 feet only 2^3 = 8 muons per second go through your head, and you can survive that just fine!
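
Here is the arithmetic spelled out in a few lines, using the post's round numbers (the half-life, speed, and 10x factor are all approximations):

```python
# Round numbers from above: ~1 microsecond half-life, ~1000 ft/us speed,
# and a 10x slowdown of the muons' clocks from our point of view.
altitude_ft = 30_000
speed_ft_per_us = 1_000
dilation = 10

travel_time_us = altitude_ft / speed_ft_per_us  # 30 us of our time

halvings_without_relativity = travel_time_us          # 30 half-lives
halvings_with_relativity = travel_time_us / dilation  # only 3 half-lives

print(2 ** halvings_without_relativity)  # 2^30: over a billion muons/second
print(2 ** halvings_with_relativity)     # 2^3 = 8 muons/second -- survivable
```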

This just leaves one puzzle: from the muons' reference frames it is our clocks that slow down, not their own. How do we explain them halving only thrice between airplane height and the Earth's surface from their point of view?

This post is inspired by a 2002 Princeton physics lecture by Peter Meyers. A similar story appears on the Stanford SLAC website.

Update (5/11/10): my following post answers the muon riddle.

Wednesday, May 05, 2010

Paradox Lost

Imagine the following game. You're given $1. Then, you keep flipping a fair coin, and every time the coin lands on heads, your money is doubled. The game ends the first time you see tails. A little thought reveals that your expected payoff is (1/2)$1 + (1/4)$2 + (1/8)$4 + (1/16)$8 + ... = $1/2 + $1/2 + $1/2 + ... = infinity! No matter how much you're willing to pay per game, if you play enough times, you'll eventually come out ahead. But how much would you pay to play this game once?

Intuitively, you'd be crazy to pay more than $20, and you'd be right not to. One reason is that your utility for money isn't linear: you just don't value 10 billion dollars 10 times more than 1 billion. Another reason is if you got lucky enough, there wouldn't be enough money in the world to pay your winnings -- capping the game at, say, a trillion dollars makes its expected value only around $20. But you wouldn't get lucky enough -- because one-in-a-trillion events just don't occur in real life.  That you shouldn't pay more than $20 to play this infinitely profitable game is known as the St. Petersburg paradox.
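
Here is a quick sketch of that capped-game calculation (the code and its structure are mine; the trillion-dollar cap is the one mentioned above):

```python
def capped_expectation(cap):
    """Expected payoff of the doubling game when winnings are capped."""
    total, payoff, prob = 0.0, 1, 0.5  # you win 2^k dollars w.p. 2^-(k+1)
    while payoff < cap:
        total += prob * payoff
        payoff *= 2
        prob /= 2
    # Every remaining outcome pays the cap; the leftover mass is 2 * prob.
    return total + 2 * prob * cap

print(capped_expectation(10**12))  # about 20.9 -- only ~$21 for a capped game
```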

This paradox is related to the two envelopes problem in my previous post (required reading for the next part). The way I set it up, the two envelopes problem is also a game of infinite expectation.  Whether you exchange envelopes or not, your expected payoff a priori is infinite.  That's why you can swap envelopes without even looking at what's in yours and expect to gain about 22% -- 1.22 times infinity is still infinity! And the problem with both the St. Petersburg game and the two envelopes problem is that no matter how much money you get, it's less than the expectation.  In both games, you get a finite amount with probability 1, but what's that compared to infinity?  It's also related to why you always gain in expectation by switching.  So, can we make the game's expected value finite and still get the two envelopes paradox? The answer is no! Paradox lost.

I'm happy enough with this explanation, but I'd love to see other ideas about how to resolve these problems.  The two envelopes problem and others like it bothered me for a long time, but I'm trying to accept that often our intuitions simply break when we deal with infinity.

If you want more detail on these paradoxes, David Chalmers has some nice papers, which proved useful in making this post.

Sunday, May 02, 2010

Two Envelopes

When I was an undergrad, a friend showed me a version of the following famous paradox.  Suppose somebody puts money into two envelopes, with one envelope getting thrice as much money as the other.  You pick an envelope at random and see that it has $90.  Now you have the option to exchange the $90 for the money in the other envelope.  Should you do it?  Well, not having much information, you figure it's about as likely that the other one has $30 as it does $270 and compute that in expectation you'll get 0.5($30)+0.5($270) = $150 if you swap.  This of course works for any amount x -- you can get about 1.67x in expectation by swapping.  You choose the envelope at random but always want to swap for the money in the other envelope -- this is the paradox! Does it trouble you?

It shouldn't -- because I cheated here.  I never said how the money was placed in the envelopes. For example, because there are infinitely many choices for possible amounts of money you can place in the envelopes, you can't choose from them uniformly at random.  And there are lots of intuitive distributions that won't work; perhaps the whole setting is impossible.

So let's be more careful.  Let's say the person placing the money chooses a positive integer y with probability 2^(-y) (this is an honest distribution) and then places 3^y dollars in one envelope and 3^(y+1) dollars in the other.  Now, when you open an envelope at random and see x dollars, you can figure out the probabilities of there being x/3 and 3x in the other.  If x = 3, you know you'll win by switching because the other envelope must have $9.  Otherwise it's not hard to see that you'll get x/3 with probability 2/3 and 3x with probability 1/3, giving you about 1.22x in expectation for switching -- not exactly the 1.67x, but still a gain.
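
If you don't trust the arithmetic, here is a quick simulation sketch of this setup (my own code, written just to check the 1.22x claim):

```python
import random
from collections import defaultdict

def draw():
    """One round of the setup above: P(y) = 2^(-y), envelopes 3^y, 3^(y+1)."""
    y = 1
    while random.random() < 0.5:  # geometric distribution over y = 1, 2, ...
        y += 1
    envelopes = [3**y, 3**(y + 1)]
    random.shuffle(envelopes)
    return envelopes[0], envelopes[1]  # (the one you open, the other one)

totals = defaultdict(lambda: [0, 0])  # amount seen -> [sum of other, count]
for _ in range(2_000_000):
    opened, other = draw()
    totals[opened][0] += other
    totals[opened][1] += 1

for x in sorted(totals)[:4]:
    s, n = totals[x]
    print(f"saw ${x}: other envelope holds {s / n / x:.2f}x on average")
# saw $3: 3.00x; saw $9, $27, $81: about 1.22x each, as computed above
```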

Now again we get the same paradox: you choose an envelope at random but always win (in expectation) if you switch -- this even works if you don't look at the amount in the envelope!  Does this break probability or only intuition?

A note: various write-ups of this problem exist online, including one on Gowers's weblog from which I borrowed ideas for this entry.

Update (5/9/10): my following post is also about this paradox.

Thursday, April 29, 2010

Hello World!

My name is Lev. I'm a computer scientist working on machine learning and theory research.  I recently got my Ph.D. from Yale and am currently at Yahoo! Research.  If you want to learn more about me, you can visit my website.

On occasion, I have things to say that don't fit into 140 characters.  So, I've started a blog.  I expect to blog on a broad variety of topics: machine learning and computer science, science and technology in general, and other issues on occasion.  I'll try to keep my posts short (not much longer than this one) and not too technical, but I hope they'll be at least somewhat insightful and entertaining.   

The title of this blog, "Room for Doubt," is taken from a lecture by Richard Feynman on the value of science, where he talks about the role of doubt in science. 
The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of very great importance, I think. When a scientist doesn't know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty darn sure of what the result is going to be, he is in some doubt. We have found it of paramount importance that in order to progress we must recognize the ignorance and leave room for doubt. Scientific knowledge is a body of statements of varying degrees of certainty -- some most unsure, some nearly sure, none absolutely certain. 

I hope this blog will leave both literal and figurative room for doubt.  Hence, my posts may be occasionally critical and skeptical, but I don't expect that will be my main focus.  But leaving room for doubt is important, and I couldn't think of a better way to remind myself than to put it in the name of this blog.