Your question can't be answered as written: the algorithm first needs to be implemented as a MapReduce computation and then measured. In general, ScaleOut hServer can complete MapReduce computations in roughly 100 msec. We suggest that you first test your MapReduce implementation on a small data set running on a small cluster (for example, four hosts, which can be licensed at zero cost using the Community Edition). Measure the latency as you increase the data set size until you hit either a resource constraint or the 100 msec limit. Then, if your algorithm demonstrates scalable speedup, you can add more hosts and proportionally increase the data set size until you reach the desired data set size; that will tell you how many hosts you need to complete the computation in 100 msec.
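A minimal timing harness for that kind of measurement might look like the sketch below. It uses the stock Apache Hadoop MapReduce Job API purely for illustration (ScaleOut hServer advertises Hadoop API compatibility, but check its documentation for the exact job-submission mechanism); the input paths, data set sizes, and identity mapper/reducer are hypothetical placeholders for your own algorithm and data.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LatencyBenchmark {
    public static void main(String[] args) throws Exception {
        // Hypothetical input directories, one per data set size; replace with your own paths.
        String[] inputDirs = { "/bench/points-1m", "/bench/points-2m", "/bench/points-4m" };

        for (String inputDir : inputDirs) {
            Job job = Job.getInstance(new Configuration(), "latency-benchmark");
            job.setJarByClass(LatencyBenchmark.class);

            // Identity mapper/reducer as stand-ins; substitute the classes that implement your algorithm.
            job.setMapperClass(Mapper.class);
            job.setReducerClass(Reducer.class);
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);

            FileInputFormat.addInputPath(job, new Path(inputDir));
            FileOutputFormat.setOutputPath(job, new Path(inputDir + "-out"));

            // Time the complete run; this is the latency to plot against data set size.
            long start = System.nanoTime();
            boolean ok = job.waitForCompletion(false);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("%s -> success=%b, latency=%d ms%n", inputDir, ok, elapsedMs);
        }
    }
}
```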
It may be the case that your algorithm's computational complexity does not allow it to demonstrate scalable speedup, i.e., throughput (measured as data points processed per second) that grows linearly as you increase both the number of data points and the number of hosts running the MapReduce implementation. In that case, the latency may simply grow too fast with the data set size to meet your 100 msec constraint.
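To make the scalable-speedup check concrete, the sketch below computes throughput and per-host throughput from a set of benchmark runs in which hosts and data points are scaled together. The numbers are made-up placeholders, not real measurements; plug in your own results from the timing harness above.

```java
/** One benchmark run: cluster size, data points processed, and measured latency. */
final class RunResult {
    final int hosts;
    final long dataPoints;
    final double latencyMs;
    RunResult(int hosts, long dataPoints, double latencyMs) {
        this.hosts = hosts; this.dataPoints = dataPoints; this.latencyMs = latencyMs;
    }
}

public class SpeedupCheck {
    public static void main(String[] args) {
        // Hypothetical results: data set size scaled in proportion to host count.
        RunResult[] runs = {
            new RunResult(4,  1_000_000, 80.0),
            new RunResult(8,  2_000_000, 85.0),
            new RunResult(16, 4_000_000, 95.0),
        };

        for (RunResult r : runs) {
            double throughput = r.dataPoints / (r.latencyMs / 1000.0); // data points per second
            double perHost = throughput / r.hosts;                     // should stay roughly flat
            System.out.printf("hosts=%d points=%d latency=%.1f ms throughput=%.0f pts/s (%.0f pts/s/host)%n",
                    r.hosts, r.dataPoints, r.latencyMs, throughput, perHost);
        }
        // If per-host throughput stays roughly constant while latency stays under 100 ms,
        // the speedup is scalable; if it falls off as hosts are added, the algorithm's
        // complexity or communication overhead is limiting scale-out.
    }
}
```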