Saturday, April 6, 2019

The Beatles Problem A - Codeforces #549 DIV1

A few days ago I was trying to solve The Beatles problem, which was a little bit tricky, and I had to analyze it to find enough observations to come up with a good solution. In this article I will go through my observations step by step and hope they are easy for you to digest.

The editorial of the problem is very concise and not easy for everyone to understand, so I will be as detailed as possible in this article to make sure you get the idea by going through my observations one by one and, of course, combining them to code a solution.

Observation #1:

 

Let's assume the jump we take each time is L. How many jumps do we need to get back to S (the start position)? Let's find out by example:

Note: For this observation we can completely ignore a and b

Assume we have n = 2, k = 3 (so there are n*k = 6 cities) and we start at S = 2:

If L = 1 then the path will be {2, 3, 4, 5, 6, 1, 2} => 6 jumps!
If L = 2 then the path will be {2, 4, 6, 2} => 3 jumps!
If L = 3 then the path will be {2, 5, 2} => 2 jumps!

From the above example we figured out that #jumps = #cities/gcd(#cities, L)

Where gcd is Greatest Common Divisor.
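
As a quick check against the example above, where #cities = n*k = 6:

L = 1 -> 6 / gcd(6, 1) = 6 jumps
L = 2 -> 6 / gcd(6, 2) = 3 jumps
L = 3 -> 6 / gcd(6, 3) = 2 jumps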

Observation #2: 


We need to figure out L, as it's much easier to use it to brute force a solution (I will show you why in the next observation).


To be honest, I didn't get to this observation until I took a small hint from a solution I couldn't fully understand (it was checking 4 possible values to find L) :O but after some manual work I figured it out.

Picture the cities on a line with 2 X markers: the first city (where we start) and the second city (where we land after one jump). From the problem statement we know that the first city has its nearest restaurant at distance A and the second city has its nearest restaurant at distance B -> Nice!

We have 2 possibilities for the nearest restaurant of the first city: it's either before or after it (marked as A1 and A2). The same goes for the nearest restaurant of the second city (marked as B1 and B2).

We also know from the problem statement that the distance between any two neighboring restaurants is k. That means the distance between A* and B* is i*k, where i is some value between 1 ... n (we will see why it's at most n in the next observation).

- Remember, we need to figure out the distance between the 2 Xs, which is ???

- Yes, correct - it's L

That leaves us with the following 4 possibilities:

Possibility #1: Restaurants are located at (A1, B1):

-> L = i*k + (B - A)

Possibility #2: Restaurants are located at (A1, B2):

-> L = i*k + (-B - A)

Possibility #3: Restaurants are located at (A2, B1):

-> L = i*k + (A + B)

Possibility #4: Restaurants are located at (A2, B2):

-> L = i*k + (A - B)

 

Observation #3:

 
Now we need to iterate to find the value of L, but what are the boundaries of that iteration?

We know that L = i*k + c, where c is one of the possible values {(A+B), (A-B), (-A-B), (B-A)}. 

We also know we have only n*k cities, and two jump lengths that differ by n*k land us on exactly the same cities, so i only needs to take values between (1 ... n), right?

Now with the above 3 observations we can easily code a solution .. here is my C++ solution for this problem.

#include<iostream>
#include<map>
#include<set>
#include<algorithm>
#include<cstring>
using namespace std;

int main() {
        long long n, k, a, b;
        cin >> n >> k >> a >> b;

        // The four possible offsets c such that L = i*k + c (Observation #2)
        long long possible[] = {a+b, a-b, -a-b, b-a};

        long long mn = n*k, mx = 1;

        for (int i = 1; i <= n; i++) {
                for (int j = 0; j < 4; j++) {
                        // Normalize the offset into [0, k)
                        long long x = possible[j]%k;
                        if (x < 0) x += k;

                        // #jumps = #cities / gcd(#cities, L) with L = i*k + x (Observation #1)
                        mn = min(mn, n*k/__gcd(n*k, i*k+x));
                        mx = max(mx, n*k/__gcd(n*k, i*k+x));
                }
        }

        cout << mn << " " << mx << endl;
}

If you have questions/feedback or spot a mistake please let me know in the comments.

Sunday, January 13, 2019

Java ForkJoinPool with an example

ForkJoinPool was introduced with the release of Java 7 to solve a very particular set of problems that tend to be hard to solve with any other thread pool implementation.

The class is designed to work with divide-and-conquer algorithms: those where a task can be recursively broken into smaller sub-tasks.

It looks just like any other thread pool, e.g., the ThreadPoolExecutor class: it implements the Executor and ExecutorService interfaces. It uses an unbounded list of tasks that are run by the configured number of worker threads, which defaults to the number of available CPUs.
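
Creating a pool is a one-liner. Here is a minimal sketch (the class name and the parallelism value of 4 are my own choices for illustration, and commonPool() requires Java 8 or later):

import java.util.concurrent.ForkJoinPool;

public class PoolCreation {
  public static void main(String[] args) {
    // Pool with an explicit parallelism level (4 is arbitrary here)
    ForkJoinPool customPool = new ForkJoinPool(4);
    System.out.println("Custom parallelism: " + customPool.getParallelism());

    // The shared common pool, typically sized at (#CPUs - 1) worker threads
    ForkJoinPool commonPool = ForkJoinPool.commonPool();
    System.out.println("Common pool parallelism: " + commonPool.getParallelism());

    customPool.shutdown();
  }
}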

Example: Parallel Merge Sort


Say we are sorting an array of 1 million elements. There are 3 main sub-tasks to sort the array:
  • Sort the first half of the array
  • Sort the second half of the array
  • Merge the two sorted sub-arrays
The base case is when it's faster to sort the sub-array directly, e.g., with insertion sort (let's assume that's when the sub-array has 10 elements or fewer); at that size a plain sort makes more sense than recursing further. In the end there will be on the order of 100,000 tasks sorting the leaf arrays, roughly 50,000 tasks merging those sorted sub-arrays, roughly 25,000 tasks merging the next level of sub-arrays ... and so on.

The most important point to notice is that none of the tasks can complete until the tasks they have spawned have also completed. This is where the ForkJoinPool comes in very handy. It is, of course, doable with a ThreadPoolExecutor, but not as efficiently, and the implementation is much more complex.

In a ThreadPoolExecutor, a parent task must wait for its child tasks to complete, but a thread cannot simply add another task to the queue and then block waiting for it: once a thread is blocked waiting, it can no longer be used to run one of the sub-tasks.

bla bla bla ... show me the code.

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.IntStream;
import lombok.AllArgsConstructor;

@AllArgsConstructor
public class ForkJoinPoolSample {

  private static int[] elements;

  @AllArgsConstructor
  private static class MergeSortTask extends RecursiveTask<Integer> {
    private int first;
    private int last;

    @Override
    protected Integer compute() {
      int len = 0;

      if (last-first <= 10) {
        // Base case: the range is small enough to sort directly
        Arrays.sort(elements, first, last+1); 
        len = (last-first+1);
      } else {
        /* Sort two sub-arrays */
        int mid = (first+last) >>> 1;
        MergeSortTask leftSubtask = new MergeSortTask(first, mid);
        leftSubtask.fork();
        MergeSortTask rightSubtask = new MergeSortTask(mid+1, last);
        rightSubtask.fork();

        len += leftSubtask.join();
        len += rightSubtask.join();

        /* Merge two sorted sub-arrays */
        MergeTask mergeTask = new MergeTask(first, mid, last);
        mergeTask.fork();
        mergeTask.join();
      }

      return len;
    }
  }

  @AllArgsConstructor
  private static class MergeTask extends RecursiveTask<Integer> {
    private int first;
    private int mid;
    private int last;

    @Override
    protected Integer compute() {
      int[] tmp = new int[last - first + 1];
      int left = first, right = mid + 1, indx = 0;

      while (left <= mid && right <= last) {
        if (elements[left] <= elements[right]) {
          tmp[indx++] = elements[left++];
        } else {
          tmp[indx++] = elements[right++];
        }
      }

      while (left <= mid) {
        tmp[indx++] = elements[left++];
      }

      while (right <= last) {
        tmp[indx++] = elements[right++];
      }

      for (indx = 0; indx < tmp.length; indx++) {
        elements[first + indx] = tmp[indx];
      }

      return tmp.length;
    }
  }

  private static void createRandomInts() {
    elements = new int[100000];
    final ThreadLocalRandom random = ThreadLocalRandom.current();

    IntStream.range(0, 100000)
        .forEach(i -> elements[i] = random.nextInt());
  }

  public static void main(String[] args) {
    createRandomInts();

    long before = System.currentTimeMillis();
    int n = new ForkJoinPool().invoke(new MergeSortTask(0, elements.length-1));
    long after = System.currentTimeMillis();

    System.out.println("Sorted " + n + " Elements in " + (after-before) + " ms.");

    boolean sorted = IntStream.range(0, elements.length-1)
        .allMatch(i -> elements[i] <= elements[i+1]);

    assertThat(sorted, is(true));
  }
}


From the doc:

fork(): Arranges to asynchronously execute this task in the pool the current task is running in
join(): Returns the result of the computation when it is done

Those methods use a series of internal, per-thread queues to manipulate the tasks and switch threads from executing one task to executing another. Of course all of that is transparent to the developer.
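
One common refinement, which the sample above does not use, is to fork only one of the two subtasks and compute the other directly on the current thread, saving one scheduling round trip per split. A minimal sketch of that idea (the SumTask class below is purely illustrative and not part of the sample above):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
  private final long[] numbers;
  private final int first, last; // inclusive bounds

  public SumTask(long[] numbers, int first, int last) {
    this.numbers = numbers;
    this.first = first;
    this.last = last;
  }

  @Override
  protected Long compute() {
    if (last - first < 1000) {          // small enough: sum sequentially
      long sum = 0;
      for (int i = first; i <= last; i++) sum += numbers[i];
      return sum;
    }
    int mid = (first + last) >>> 1;
    SumTask right = new SumTask(numbers, mid + 1, last);
    right.fork();                                               // schedule the right half asynchronously
    long leftSum = new SumTask(numbers, first, mid).compute();  // do the left half on this thread
    return leftSum + right.join();                              // then wait for the forked half
  }

  public static void main(String[] args) {
    long[] numbers = new long[1_000_000];
    for (int i = 0; i < numbers.length; i++) numbers[i] = i;
    long total = new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length - 1));
    System.out.println("Sum = " + total);
  }
}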


Saturday, January 5, 2019

Java Weak References

Reusing objects is important in Java, but it can also cause memory and performance issues if the objects to be reused can't be freed by the GC. Sometimes we want to reuse objects only as long as they are still referenced somewhere, or only while there is a good chance they will be used again in the future.

Weak references give us the opportunity to achieve this without largely affecting the memory and performance of our application (in a more GC-friendly way). In today's post I will go through the differences between strong and weak references in Java and their effect on the garbage collector.

Strong References

 

Strong references are the default type of reference in Java. Objects that are strongly referenced aren't eligible for collection, in either minor or full GC, until they are no longer strongly referenced.

 

All object references in Java are strong references unless explicitly specified otherwise; see the sections below.


StringBuilder builder = new StringBuilder();

Weak References


Weak references are not the default and must be created explicitly. This type of reference maintains a `WEAK` link to an object: once the object is no longer strongly referenced, it becomes eligible for GC and can be collected even though it is still weakly referenced.

StringBuilder builder = new StringBuilder();
WeakReference<StringBuilder> weakBuilder = new WeakReference<>(builder);
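
A quick sketch of that behavior (the class name is mine; note that System.gc() is only a hint, so the second get() may occasionally still return the object):

import java.lang.ref.WeakReference;

public class WeakReferenceDemo {
  public static void main(String[] args) {
    StringBuilder builder = new StringBuilder("hello");
    WeakReference<StringBuilder> weakBuilder = new WeakReference<>(builder);

    // While a strong reference exists, get() returns the object
    System.out.println(weakBuilder.get()); // hello

    // Drop the strong reference and suggest a GC run
    builder = null;
    System.gc();

    // After a collection, the weakly referenced object is typically gone
    System.out.println(weakBuilder.get()); // most likely null
  }
}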

Soft References

 

This type of reference is similar to a weak reference, but the object remains in memory longer: it tends to survive garbage collections until memory really needs to be reclaimed (before throwing an OutOfMemoryError, the JVM clears all soft references). Collectively, soft references behave essentially like one large, least recently used (LRU) object pool (cache).

The JVM keeps track of the last access to each soft reference and uses it to decide whether the reference is eligible for clearing. This is controlled by the `-XX:SoftRefLRUPolicyMSPerMB` flag, which defaults to 1000, i.e., one second of lifetime per free megabyte in the heap; for example, with roughly 1,000 MB of free heap, a soft reference untouched for more than about 1,000 seconds becomes eligible for clearing.

StringBuilder builder = new StringBuilder();
SoftReference<StringBuilder> softBuilder = new SoftReference<>(builder);
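
To illustrate the cache-like usage, here is a minimal sketch of a memory-sensitive cache built on soft references (the SoftCache class and its API are my own illustration, not a standard library class):

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy cache whose values the GC may clear under memory pressure
public class SoftCache<K, V> {
  private final Map<K, SoftReference<V>> map = new HashMap<>();

  public V get(K key, Function<K, V> loader) {
    SoftReference<V> ref = map.get(key);
    V value = (ref == null) ? null : ref.get();
    if (value == null) {                       // missing, or already cleared by the GC
      value = loader.apply(key);
      map.put(key, new SoftReference<>(value));
    }
    return value;
  }

  public static void main(String[] args) {
    SoftCache<Integer, String> cache = new SoftCache<>();
    String v = cache.get(42, k -> "value-" + k); // loads and caches "value-42"
    System.out.println(v);
  }
}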

Phantom References

 

Phantom references help us perform cleanup without implementing the `Object.finalize` method, which can have negative implications for the application: it is not deterministic, it can make the object reachable again, and it can hurt application performance.

Phantom references are different from weak/soft references in that the reference is only enqueued after the object has become otherwise unreachable, which gives the application a notification it can use to release any resources the object had acquired before its memory is finally reclaimed.

The `PhantomReference` constructor accepts two parameters: the referent object and a `ReferenceQueue`, onto which the reference is enqueued once the referent becomes eligible for cleanup.

ReferenceQueue q = new ReferenceQueue();
PhantomReference pr = new PhantomReference(object, q);
// Later, at some other point
Reference r = q.remove(); // blocks until a reference is enqueued
// Now, clean up anything you want
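
To make the pattern concrete, here is a minimal sketch of the usual approach: a PhantomReference subclass that carries whatever is needed for cleanup, since get() on a phantom reference always returns null. All names below are illustrative:

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

public class PhantomCleanupDemo {

  // Carries the cleanup payload, because the referent itself is unreachable by cleanup time
  static class ResourceReference extends PhantomReference<Object> {
    private final String resourceName; // stand-in for a file handle, socket, etc.

    ResourceReference(Object referent, ReferenceQueue<Object> queue, String resourceName) {
      super(referent, queue);
      this.resourceName = resourceName;
    }

    void cleanUp() {
      System.out.println("Releasing " + resourceName);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    ReferenceQueue<Object> queue = new ReferenceQueue<>();
    Object holder = new Object();
    ResourceReference ref = new ResourceReference(holder, queue, "native-buffer-1");

    holder = null;   // drop the strong reference
    System.gc();     // only a hint; enqueueing may take a while (or not happen at all)

    // Wait up to 5 seconds for the phantom reference to be enqueued, then run the cleanup
    Reference<?> enqueued = queue.remove(5000);
    if (enqueued != null) {
      ((ResourceReference) enqueued).cleanUp();
    }
  }
}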

Quick Summary (From: Java Performance - The Definitive Guide)

  • Indefinite (soft, weak, phantom) references alter the ordinary lifecycle of Java objects, allowing them to be reused in ways that may be more GC-friendly.
  • Weak references should be used when an application is interested in an object only if that object is strongly referenced elsewhere in the application.
  • Soft references hold onto objects for (possibly) long periods of time, providing a simple GC-friendly LRU cache.
  • Indefinite references consume their own memory and hold onto the memory of other objects for long periods of time; they should be used sparingly.