Google Code Jam 2014 | Round 2

Posted on 31 May 2014

By sping128

Today's GCJ 2014 Round 2 had 4 problems. The top 500 contestants advance to Round 3, which required at least 50 points with a small penalty time. And yeah! I finished in 413th place. This was my first time getting a GCJ t-shirt and advancing to Round 3.

Let's talk about the problems. The first problem (A: Data Packing): given N files with their sizes and the capacity of a disc, what is the minimum number of discs required to store all the files, subject to 2 conditions:

  1. A disc can contain at most 2 files.
  2. A file can't be split up and stored across different discs.

This problem was the easiest one of this round. An O(n log n) solution gets accepted. My greedy solution is to loop from the smallest file a, and for each one, find the biggest file that can be put on a disc together with a. If we can't find one, we just put a on a disc by itself. We repeat until no files are left. A nice data structure for this is multiset (C++), because we can call lower_bound() and there might be duplicate file sizes.
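Here is a minimal sketch of that greedy in C++ (the function name and input format are my own; I use upper_bound on the multiset, which is just the mirror of the lower_bound-style search described above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy: repeatedly take the smallest remaining file and pair it with the
// largest remaining file that still fits on the same disc, if any.
int minDiscs(const vector<long long>& sizes, long long discCapacity) {
    multiset<long long> files(sizes.begin(), sizes.end());
    int discs = 0;
    while (!files.empty()) {
        long long smallest = *files.begin();
        files.erase(files.begin());
        // Largest file b with smallest + b <= discCapacity: take the first
        // element greater than the remaining space and step back once.
        auto it = files.upper_bound(discCapacity - smallest);
        if (it != files.begin()) files.erase(prev(it));  // pair it with `smallest`
        ++discs;                                         // one disc used either way
    }
    return discs;
}
```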

The second problem (B: Up and Down): given a list of integers, we would like to rearrange it into an up-and-down sequence (one where A_1 < A_2 < ... < A_m > A_(m+1) > ... > A_N for some index m). We can only rearrange it by swapping two adjacent elements. The problem asks for the minimum number of swaps needed to accomplish this. My first attempt was to find where we should place the peak A_m (it's obvious that A_m is the largest element of the sequence). There are N possible positions, so we could try moving the largest element to each position and then count the inversions in the left part and the right part of the sequence. This idea was completely wrong: there is no proof that it gives the minimum number of swaps. After thinking for a while, I came up with a solution that starts from the smallest element instead. The smallest element has to end up at either the leftmost or the rightmost end, so we simply move it to whichever end is closer to its current position (moving it to either end doesn't affect the relative order of the remaining elements, so the closer end is always at least as good). That's it! We then do the same thing with the remaining elements, from smallest to largest.
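A short sketch of that idea, assuming distinct values (the function name is mine); erasing an element once it has been "moved" to an end keeps the remaining distances correct:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy: the smallest remaining element must end up at one of the two ends,
// so pay the cheaper of the two distances, drop it, and repeat.
long long minAdjacentSwaps(vector<int> a) {
    long long swaps = 0;
    while (a.size() > 1) {
        int idx = min_element(a.begin(), a.end()) - a.begin();
        swaps += min<long long>(idx, (long long)a.size() - 1 - idx);
        a.erase(a.begin() + idx);  // moving it to an end doesn't change the
                                   // relative order of the remaining elements
    }
    return swaps;
}
```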

After finishing the first two problems, I only had 1 hour left, so I changed my strategy to just solving the small inputs of the last two problems. I'll give brief ideas of how to solve those small inputs; the ideas are quite straightforward to me. For C: Don't Break The Nile, we want to find the maximum flow of water from the south side to the north side of the river, where each grid cell can carry only 1 unit of water to its adjacent cells. So let's build a network flow graph:

  1. For each cell, we create 2 nodes called lower node and higher node, and have an edge with capacity = 1 going from the lower to the higher node. If the cell contains a building, set the edge’s capacity to 0.
  2. Now create edges between the adjacent cells: connect the higher node of a cell to the lower node of each adjacent cell with an edge of capacity 1.
  3. Make a new node called source and connect it to all the lower nodes of the south side cells.
  4. Make a new node called sink and connect it to all the higher nodes of the north side cells.

That's all. The answer to the problem is the maximum flow in this graph. This method, of course, is too slow to pass the large input set, where the height of the river can be up to 10^8.
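Here is a rough sketch of that construction for the small input. All names are my own, the river is represented as a character grid for brevity (the real input describes buildings by their corner coordinates), and any standard max-flow implementation would do in place of the plain Edmonds-Karp below:

```cpp
#include <bits/stdc++.h>
using namespace std;

// A plain Edmonds-Karp max flow; any standard implementation would do here.
struct MaxFlow {
    struct Edge { int to; long long cap; };
    vector<Edge> edges;
    vector<vector<int>> adj;
    MaxFlow(int n) : adj(n) {}
    void addEdge(int u, int v, long long cap) {
        adj[u].push_back(edges.size()); edges.push_back({v, cap});
        adj[v].push_back(edges.size()); edges.push_back({u, 0});  // residual edge
    }
    long long run(int s, int t) {
        long long flow = 0;
        while (true) {
            // BFS for an augmenting path, remembering the edge used to reach each node.
            vector<int> parentEdge(adj.size(), -1);
            parentEdge[s] = -2;
            queue<int> q; q.push(s);
            while (!q.empty() && parentEdge[t] == -1) {
                int u = q.front(); q.pop();
                for (int id : adj[u])
                    if (edges[id].cap > 0 && parentEdge[edges[id].to] == -1) {
                        parentEdge[edges[id].to] = id;
                        q.push(edges[id].to);
                    }
            }
            if (parentEdge[t] == -1) return flow;  // no augmenting path left
            long long aug = LLONG_MAX;
            for (int v = t; v != s; v = edges[parentEdge[v] ^ 1].to)
                aug = min(aug, edges[parentEdge[v]].cap);
            for (int v = t; v != s; v = edges[parentEdge[v] ^ 1].to) {
                edges[parentEdge[v]].cap -= aug;
                edges[parentEdge[v] ^ 1].cap += aug;
            }
            flow += aug;
        }
    }
};

// grid[r][c] == '#' marks a building cell; row 0 is taken as the south bank.
long long solveSmall(int W, int H, const vector<string>& grid) {
    auto lower  = [&](int r, int c) { return 2 * (r * W + c); };
    auto higher = [&](int r, int c) { return 2 * (r * W + c) + 1; };
    int source = 2 * W * H, sink = source + 1;
    MaxFlow mf(2 * W * H + 2);
    int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    for (int r = 0; r < H; ++r)
        for (int c = 0; c < W; ++c) {
            // Splitting the cell into lower/higher enforces "1 unit per cell".
            mf.addEdge(lower(r, c), higher(r, c), grid[r][c] == '#' ? 0 : 1);
            for (int d = 0; d < 4; ++d) {
                int nr = r + dr[d], nc = c + dc[d];
                if (0 <= nr && nr < H && 0 <= nc && nc < W)
                    mf.addEdge(higher(r, c), lower(nr, nc), 1);
            }
        }
    for (int c = 0; c < W; ++c) {
        mf.addEdge(source, lower(0, c), 1);     // capacity 1 is enough: the cell itself caps at 1
        mf.addEdge(higher(H - 1, c), sink, 1);  // north bank
    }
    return mf.run(source, sink);
}
```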

Now, on to the last problem (D: Trie Sharding). Given M strings, we would like to divide them into N smaller groups and build a trie data structure for each group so that the total number of nodes used across all the tries is maximized. So we want to find the maximum total number of nodes over all possible group arrangements. For the small dataset, M = 8 and N = 4, so we can just brute-force all the possible ways of forming the groups and build the tries for each of them. We finally return the largest total number of nodes over all the possible ways of dividing up the strings into groups. This is all about implementation. For the large dataset, M = 1000 and N = 100, and I still have no idea how to solve it.
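A minimal sketch of that brute force (names are mine; I also assume every group must get at least one string, so drop that check if empty groups are allowed):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Trie size for one group: the root plus one node per distinct non-empty prefix.
int trieSize(const vector<string>& group) {
    set<string> prefixes;
    for (const string& s : group)
        for (size_t len = 1; len <= s.size(); ++len)
            prefixes.insert(s.substr(0, len));
    return (int)prefixes.size() + 1;  // +1 for the root node
}

// Brute force for the small input: try every assignment of the M strings to N groups.
int maxTotalNodes(const vector<string>& words, int n) {
    int m = words.size(), best = 0;
    vector<int> group(m, 0);  // group[i] = which of the n groups word i goes to
    while (true) {
        vector<vector<string>> groups(n);
        for (int i = 0; i < m; ++i) groups[group[i]].push_back(words[i]);
        int total = 0;
        bool ok = true;  // assumption: every group must hold at least one string
        for (const auto& g : groups) {
            if (g.empty()) { ok = false; break; }
            total += trieSize(g);
        }
        if (ok) best = max(best, total);
        // advance the base-n counter to the next assignment
        int i = 0;
        while (i < m && ++group[i] == n) group[i++] = 0;
        if (i == m) break;
    }
    return best;
}
```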

This post is a bit long, but that's it for GCJ 2014 Round 2. I really like the problems, which I would categorize as hard for my level, but improving requires solving hard problems :). I'm planning to compete in Round 3 as well, but before that I should read the analysis of this round and try coding up the solutions!

Plus

Now I'm starting to write blog posts seriously in order to summarize what I learn from programming competitions. I think it's good to rethink the problems and how well I did during each contest, which can help me avoid repeating the same mistakes. Writing makes the ideas solid, so that when I see a similar problem again, I can pick up the idea quickly. Furthermore, it might guide others toward ideas for solving problems. Thanks to this blog, which motivated me to start writing again.