IMO 2016: Overview
This post details my overall experience as an observer for Bulgaria at the 57th International Mathematical Olympiad, which took place in Hong Kong between July 6 and July 16, 2016; there will be a subsequent post listing day-to-day impressions, plus some photos.
This was my first time behind the scenes of the IMO, so pretty much everything was new to me. Hopefully this post can be of use to anybody curious about the inner workings of the olympiad!
The happy news
The happy news is that this time our team did remarkably well compared to the last few years: we ranked 18th among about 110 countries[1], our best result since 2008. Moreover, we were impeccable on the easy problems 1 and 4, with all six contestants scoring the full 7/7 on both (84/84 points), which helped everybody get a medal (something that hadn’t happened since 2010). We ended up with 3 silver and 3 bronze medals, a solid batch. Overall, I think that’s pretty impressive for a 7-million country[2].
The problems
The problems were for the most part beautiful; my favorites are 3, 5, and 6 (3 and 5 are both from Russia!), and I dislike 4, which to me is just a sequence of calculations with no significant ideas involved (though one can argue that 1 can also be solved like that). Problems 1 and 2 were fairly standard.
Among the jury members whose opinions on difficulty I got to hear, the consensus (conditional on each problem’s position in the exam) seemed to be the following:
- Problem 1: hard
- Problem 2: easy
- Problem 3: somewhat hard
- Problem 4: somewhat easy
- Problem 5: somewhat hard
- Problem 6: easy
This matched my own views. Such deviations from the intended difficulty ordering are normal, since you can’t make a perfect exam with such a small shortlist (8 problems from each area) and so little time. However, I do think the jury’s opinion was swayed by the unfortunate positions of problem 1 as G1 (the easiest geometry problem) and problem 6 as C7 (the next-to-hardest combinatorics problem) in the shortlist. But more on that later.
The consensus among the contestants (judging by the results) seemed to be that we underestimated 2 and 6, though the unforeseen difficulty of 6 was likely psychological (simply because it was problem 6). So while some people (especially on AoPS) were quick to predict high cutoffs, the gold cutoff ended up at just 29.
The ‘flat distribution’[3] trap
There was a point during problem selection when there was a real danger of the vote swinging towards an easy exam that wouldn’t distinguish well between contestants. The thing is that there are now many “new” countries at the IMO, which have an (understandable) tendency to vote for problems more accessible to the less technically prepared contestants. I believe that most, if not all, of the problems at the olympiad should be as accessible as we can make them, and rest on simple but creative arguments, as opposed to heavy theory and standard machinery. A notorious example of the latter is problem 6 from IMO 2007. As for positive examples, I think problems 5 and 6 at this IMO were perfect. However, my feeling is that on average there is a non-negligible negative correlation between difficulty and accessibility among the shortlist problems; I’m guessing the reason is that it’s just darn hard to come up with perfect olympiad problems.
Anyway, something – maybe this correlation trap, or maybe just wanting easy points for their teams – seemed to drive the newer countries to prefer easier problems, which in turn would have led to an exam that doesn’t distinguish contestants much. That’s something we don’t want[4], because it makes us feel like the whole IMO was a waste of time. Happily, several conscientious team leaders spoke up against the “flat” motion, and miraculously the jury changed their minds (yes, that’s something people don’t usually do).
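To make footnote [3] concrete, here is a minimal sketch in Python – with entirely made-up scores, not real IMO data – of what a ‘flat’ score distribution looks like next to a more discriminating one: sort everyone’s total and plot it, and long horizontal plateaus signal an exam that separates contestants poorly.

```python
# Hypothetical illustration of footnote [3]: plot all totals in sorted order.
# Long horizontal plateaus = many ties = an exam that distinguishes poorly.
import random
import matplotlib.pyplot as plt

random.seed(57)

def total_score(all_or_nothing: bool) -> int:
    """Made-up total over 6 problems, each worth 0-7 points."""
    if all_or_nothing:
        # 'Flat' regime: every problem is either fully solved or not at all,
        # so the totals pile up on the multiples of 7.
        return sum(7 * (random.random() < 0.5) for _ in range(6))
    # More discriminating regime: partial credit spreads the totals out.
    return sum(random.randint(0, 7) for _ in range(6))

for flat, label in [(True, "flat (all-or-nothing problems)"),
                    (False, "spread out (partial credit)")]:
    scores = sorted(total_score(flat) for _ in range(600))
    plt.step(range(len(scores)), scores, where="post", label=label)

plt.xlabel("contestants, sorted by total score")
plt.ylabel("total score (out of 42)")
plt.legend()
plt.show()
```

The all-or-nothing curve is a staircase of wide plateaus at 0, 7, 14, …, 42 – exactly the picture the jury was in danger of producing.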
Considerations during problem selection
Beauty will save the world?
I was surprised by how much non-mathematical considerations can shape the exam. For example, well before any problems are chosen, all team leaders vote in the so-called beauty contest, where the shortlisted problems are rated on three-point scales for difficulty and beauty. What surprised me wasn’t that problem 6 was rated the most beautiful in the shortlist (it simply is very, very neat); it was that it became problem 6 instead of problem 3 or 5, which would have made more sense given its difficulty. This decision seemed to be a combination of three things:
- the position of the problem as C7 in the combinatorics section of the shortlist, which probably made it seem harder than it is;
- the choice of problems 1, 2, 4 and 5: a total of four easy and medium problems, one from each area, are chosen before the hard problems, but are not assigned exact positions on the exam beyond that[5]. So by the time you’re choosing how to order the hard problems 3 and 6, you face additional constraints; and
- the jury’s overwhelming consensus that #6 must be an exceptionally beautiful problem.
I find the last reason convincing, but not convincing enough in the context of this exam; given the results, I believe many students were misled by this ordering of the problems and didn’t try problem 6 just because it was problem 6[6].
Half geometry, half something
Another interesting, though less prominent, feature of problem selection was that some team leaders argued that certain problems just can’t be put into one of the four neatly labelled boxes ‘algebra’, ‘combinatorics’, ‘geometry’ and ‘number theory’. In this year’s exam, these were problems 3 (formally number theory) and 6 (formally combinatorics). Problem 3 is a glorious mix of number theory and geometry (and, some might argue, combinatorics), while for problem 6 the geometric nature of the configuration matters – the statement fails for pseudosegments (a set of arcs, every two of which intersect in at most one point).
This way, the supporters of this point of view argued, we get one more geometry problem out of those two, so people shouldn’t be sad that only one problem from the geometry section of the shortlist ended up in the exam. As another example, during the selection a problem from the algebra section of the shortlist competed for a spot among the easy/medium problems as if it were combinatorics.
I’m a big fan of this way of thinking, and I think it works especially well with the mechanism that first picks one easy/medium problem from each area. For one thing, with six problems and four areas the ideal number of problems from each area is 1.5, so once you’ve chosen one from each, any further choice feels a little awkward; but that’s backward thinking, since it already assumes you’re sticking to the mechanism for the easy/medium problems. A better reason, I believe, is that often the most beautiful and hard problems are both beautiful and hard precisely because they combine insights from different areas. In this sense, we had a good IMO.
Geometry should be solved by geometric, and not algebraic, intuition
This makes total sense, and I’m a big fan. There was a geometry problem in the shortlist even easier than G1, which was quickly shot down because it was easily amenable to various computational techniques.
On the other hand, one person on our team produced a completely computational solution to problem 1, too (and got full marks).
Ordering within the shortlist
Finally, this is somewhat trivial, but it matters more than you might think: I already mentioned above that problem 6’s position as C7 in the shortlist affected how its difficulty was perceived. In fact this happens with many problems. At the IMO there isn’t much time for team leaders to get acquainted with the solutions to all the problems in the shortlist, let alone try to solve them by themselves. So the way the problem selection committee orders the problems within the shortlist is given more credibility than it probably deserves, and team leaders could really use some helpers during problem selection. This brings us to…
Your part as an observer
The main thing to know is that pretty much the only thing observers can’t do is vote – only the team leader of each country can – but even so, observers can consult with the leader and thereby influence the vote. There was ample opportunity, both during breaks and during discussions, to chat with leaders about the current situation.
Apart from that, observers can offer help at each stage of the olympiad. The deep, complicated principle at work here is that two heads are better than one:
- upon arrival, they can get to know the shortlist and advise the leader accordingly. For example, my leader and I split the problems by area according to our favorites: he took algebra and number theory, and I took geometry and combinatorics.
- when marking schemes are out, observers can similarly get to know their ins and outs.
- during the competition, when contestants’ questions arrive, the fittest observers can outrun other team leaders walking between the table where questions arrive and the queue for sending back the answers, thus delivering the answers to their team members several minutes earlier[7].
- after the competition, they can help grade the contestants’ papers, so that the leader has a better idea of any potential weak spots well before coordination[8].
- observers can participate in coordination, though keep in mind that during a given problem’s coordination, only two people among the team leader, deputy leader and observers can represent a country.
For a concrete example of how observers can even help change the final score: during the coordination of one of our problems, a student from our team had some partial results that we believed were worth 1 mark. The solution could be completed using Gaussian integers, as in one of the official solutions; however, the student’s paper made no mention of that idea. The precise mix of arguments he had given turned out to be one of a kind at the olympiad, so it was up to the head coordinator for that problem to make the final decision. He ended up insisting on 0 marks unless we could show him a continuation of the student’s ideas without Gaussian integers. We had about one hour to figure it out, and luckily, with the last 4% of my phone’s battery, I found a solution on AoPS which we could use, and we got our 1 mark.
Beyond helping, observers are free to attend all jury meetings.
All this is very good if you’re from a country with enough sponsors to send observers along; poorer countries, however, seem to be at a disadvantage, because they miss out on all these benefits.
There is the related question of whether team leaders can send scans of contestants’ papers to people back home: while until this year the rules allowed that, the jury accepted a change according to which leaders can still consult people who are not at the olympiad, but cannot share the precise details of the papers (such as scans) with them. This policy makes sense by itself, I think, but it deepens the problem above…
The atmosphere and people at the 57th IMO
It was amazing to witness mathematicians from 110+ countries come together for a cause that serves the brightest high-school math students in the world. While one can argue that different countries have different interests (for example, countries with well-trained students might prefer different problems from countries with inexperienced students), jury meetings were conducted in a spirit of goodwill, and, more notably, rational arguments were able to change the vote several times.
My only disappointment came during the final jury meeting, where the medal cutoffs are decided. This year there seemed to be about 25 countries missing from this meeting. For other jury meetings that’s not a tragedy, but in the final meeting you need 2/3 of all jury members to vote ‘yes’ if you are to allow more than exactly half the contestants to get medals. So what happened this year was that we needed 72 votes to give a dozen more medals instead of a dozen fewer, and there were only about 80 jury members present… so it didn’t work.
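To spell out the arithmetic (the 72 and 80 are the approximate figures above; the full jury size of about 108 is my own back-calculation from the 72):

$$\tfrac{2}{3} \cdot 108 = 72 \text{ ‘yes’ votes needed}, \qquad \tfrac{72}{80} = 90\% \text{ of those actually present}.$$

Since the threshold is computed from the full jury rather than from those in the room, every absence effectively counts as a ‘no’ vote.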
Other than that, the atmosphere was relaxed, and it was not unusual for people to joke during the jury discussions (kudos to Geoff Smith, the president of the IMO, for being an especially jolly guy).
I got to talk to some of the team leaders, and it seems most are involved in academic mathematics through teaching or research. Unfortunately, language still seems to be something of a barrier. In the first several jury meetings, the policy was for the more complicated questions to be translated into the other official languages of the IMO – Russian, French and Spanish – but at some point we dropped it, and people seemed to be OK with that. Still, I felt that some leaders weren’t at ease when addressing the jury in English, which gave native English speakers a bit of an advantage in terms of persuasiveness.
Events and logistics
This year’s IMO was extremely well-organized, thanks to our diligent Hong Kong hosts (and, likely, to the generous sponsorship). There wasn’t much free time for leaders and observers, but the organizers managed to cram in some cool events. Hong Kong was as beautiful as it was warm and humid (a lot), and the Hong Kong University of Science and Technology’s campus offered some stunning views.
Perhaps the most memorable of the events was the “Forum on mathematics in society” – or rather one of its talks, in which Professor Man Keung Siu raised the question “Does society need IMO medalists?”. Thanks to Professor Siu, the full text is available here, and I warmly recommend it. The main thesis was that while society doesn’t need IMO medalists per se, it does need people who are aware of the role of mathematics in the world and in human civilization, and who are not afraid to reason about its basic principles. So the value of the IMO is that it drives a large mass of people worldwide to improve their mathematical skills. Here’s a particular excerpt that serves as a bit of an answer to the question in the title (it is actually taken from the book “Alice in Numberland: A Students’ Guide to the Enjoyment of Higher Mathematics”):
My good friend, Tony Gardiner, an experienced four-time UK IMO team leader, once commented that I should not blame the negative aspects of mathematics competitions on the competition itself. He went on to enlighten me on one point, namely, a mathematics competition should be seen as just the tip of a very large, more interesting, iceberg, for it should provide an incentive for each country to establish a pyramid of activities for masses of interested students. It would be to the benefit of all to think about what other activities besides mathematics competitions can be organized to go along with it. These may include the setting up of a mathematics club or publishing a magazine to let interested youngsters share their enthusiasm and their ideas, organizing a problem session, holding contests in doing projects at various levels and to various depth, writing book reports and essays, producing cartoons, videos, softwares, toys, games, puzzles, … .
So there you go, the IMO is not completely useless!
Acknowledgements
I’d like to thank the people involved in the training of the Bulgarian team, who invited me to be an observer at this IMO.
I’d also like to thank the American Foundation for Bulgaria for their generous support, which made it possible for me to attend the entire event free of charge. More importantly, the AFB has been consistently sponsoring the Bulgarian national math team for more than 10 years now. Finally, I’d like to thank the “Georgi Chilikov” Foundation for their support for the team, and for their broader contributions to education in Bulgaria.
[1] However, people reporting on the IMO often fail to mention that there are usually several countries that send fewer than six contestants, so it’s not fair to compare team results across all countries. This year there were about 15 such countries, so the correct number is more like 95.
[2] Though we have done more impressive things in the past: for example, see our results from the 90s. Also, it seems to be an interesting mathematical problem to normalize IMO performance with respect to country population.
[3] Here, ‘flat distribution’ means that when you plot the scores of all the contestants in order, the resulting graph is mostly composed of several flat plateaus.
[4] Some related Stack Exchange discussion here.
[5] This is a recent mechanism (in place since 2012), and until this year it seemed to have the side effect of forcing problems 3 and 4 to be geometry.
[6] On the bright side, our team got lucky, and several people tried it. In the end, two people solved it, which gave us a considerable advantage.
[7] OK, this was probably just a by-product of the logistics of this IMO’s question-answering process.
[8] Coordination is the process through which team leaders and observers negotiate the marks on their team’s papers with the official graders of the IMO, the coordinators. Since many of the papers are in a language unknown to the coordinators, they often need some additional clarifications. But coordination also offers the opportunity to argue for more credit when a contestant’s solution deviates from the marking schemes.
A. Makelov