18-749 Evaluation Report
Team 2: The House
Chief Experimenter: Joohoon Lee
Members: Paul Cheong, Jun Han, Suk Chan Kang, Mohammad Ahmad

Experimental Setup

  - Each client runs on a separate machine
  - The two methods chosen for the experiment (both two-way invocations) are highlighted in Table 1
  - The middle tier has 2 replicas
  - Our implementation has two replicas and fail-over, but this experiment assumes no fault-tolerance, so all of our measurements were taken without any crashes on the server
  - The following parameters are used (48 combinations in total)
 
Number of clients: 1, 4, 7, 10
Size of reply message: original, 256, 512, 1024 bytes
Inter-request time: 0 ms (no pause), 20 ms, 40 ms
  - To support a variable reply size, before the client starts its 10,000 timed invocations, it tells the server the reply size for the given run. The server then allocates a dummy data block of the requested size on the entity bean (which thereby becomes stateful) and returns that block when the client invokes the actual test method. We used a String object to hold the dummy data, assuming each character of the String takes 2 bytes (Java stores strings as 2-byte Unicode). Although this may not hold exactly, since we are interested in the size of the "useful data", it is an acceptable assumption for this evaluation. (A minimal sketch of this mechanism appears after this list.)
  - Each of the two test methods is called 5,000 times, for a total of 10,000 requests per experiment
  - Java does not provide a fine-grained timing facility, so we used a native method through JNI. The probe calls the native timer implementation, which gives microsecond granularity. (A sketch of the timer wrapper appears after this list.)
  - A total of 7 probes were used in these experiments; each probe is described in Table 2
  - In Java, the size of an object cannot easily be determined, so we used simple approximations: primitive data types are assumed to be 32 bits, and variable-size objects are counted as sizeof(object). All sizes are in bytes. When getPlayerName is called in the experiment, the id is always the same, so the reply size is fixed. (The size-accounting rule is sketched after this list.)
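
The dummy-reply mechanism can be summarized with the sketch below. It strips out the EJB plumbing (remote/home interfaces, deployment descriptors), and the method names setReplySize() and getDummyReply() are illustrative rather than our actual bean interface.

```java
// Sketch of the variable-size reply mechanism (EJB plumbing omitted).
// setReplySize() and getDummyReply() are illustrative method names.
public class DummyReplyBean {

    // Dummy payload kept as bean state -- this is what makes the entity bean stateful.
    private String dummyReply = "";

    // Called once by the client before the 10,000 timed invocations.
    // 'bytes' is the requested reply size; each char is counted as 2 bytes.
    public void setReplySize(int bytes) {
        int chars = bytes / 2;                // 2-byte Unicode characters assumed
        StringBuilder sb = new StringBuilder(chars);
        for (int i = 0; i < chars; i++) {
            sb.append('x');                   // contents are irrelevant, only the size matters
        }
        dummyReply = sb.toString();
    }

    // Called by the timed test method: every invocation in a run returns the
    // same pre-allocated block, so the reply size stays fixed for that run.
    public String getDummyReply() {
        return dummyReply;
    }
}
```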
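
The JNI timer probe looks roughly like the following. The class and method names (MicroTimer, currentTimeMicros) and the library name "probe749" are assumptions for illustration; the native side, not shown here, is a thin wrapper around a microsecond-granularity system timer such as gettimeofday().

```java
// Sketch of the JNI-based microsecond timer. MicroTimer, currentTimeMicros(),
// and the "probe749" library name are illustrative; the native implementation
// (not shown) wraps a microsecond-granularity system timer.
public final class MicroTimer {
    static {
        System.loadLibrary("probe749");       // loads libprobe749.so containing the native timer
    }

    // Returns a wall-clock timestamp with microsecond granularity.
    public static native long currentTimeMicros();
}
```

Each probe simply records such a timestamp (or the invocation name) and appends it to the corresponding DATA749_* file listed in Table 2.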
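
The size figures in Table 1 follow the rule below; sizeofString() is a hypothetical helper shown only to make the accounting explicit.

```java
// Size accounting used for the SZ_REQ / SZ_REPLY columns of Table 1 (approximation):
// a primitive counts as 4 bytes (32 bits), a String counts as 2 bytes per character.
// sizeofString() is a hypothetical helper used only for illustration.
public final class SizeApprox {
    public static final int PRIMITIVE_BYTES = 4;

    public static int sizeofString(String s) {
        return 2 * s.length();                // Java stores strings as 2-byte Unicode
    }
}
// Example: int readBalance(String playerID) has SZ_REQ = sizeofString(playerID)
// and SZ_REPLY = 4 for the int return value, matching the table.
```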

Table 1: List of Client Invocations
METHOD | ONE_WAY | DB_ACS | SZ_REQ | SZ_REPLY
void savePlayer(int uid, int oid, String id, String name, Integer balance) | YES | YES | 12 + sizeof(id) | 0
void createTable(int uid, int oid, String name) | YES | YES | 8 + sizeof(name) | 0
void dealPlayer(int uid, int oid, String playerID) | YES | YES | 8 + sizeof(playerID) | 0
Card lastDealtCard(String playerID) | NO (2-WAY) | YES | sizeof(playerID) | 12 + sizeof(type)
void dealDealer(int uid, int oid, String tableName) | YES | YES | 8 + sizeof(tableName) | 0
Card lastDealtDealerCard(String tableName) | NO (2-WAY) | YES | sizeof(tableName) | 12 + sizeof(type)
void joinTable(int uid, int oid, String playerID, String tableName) | YES | YES | 8 + sizeof(playerID) + sizeof(tableName) | 0
void placeBet(int uid, int oid, String playerID, int bet) | YES | YES | 12 + sizeof(playerID) | 0
void startGame(int uid, int oid, String playerID, String tableName) | YES | YES | 8 + sizeof(playerID) + sizeof(tableName) | 0
void adjustBalance(int uid, int oid, String playerID, int adjust) | YES | YES | 12 + sizeof(playerID) | 0
int readBalance(String playerID) | NO (2-WAY) | YES | sizeof(playerID) | 4
String getPlayerName(String id) | NO (2-WAY) | YES | sizeof(id) | Variable
String getTableName(Integer uid) | NO (2-WAY) | YES | 4 | Variable

Table 2: List of Probes
Probe | File Name | Data | Purpose
1 | DATA749_app_out_cli_${STY}_2srv_${C}cli_${IRT}us_${BYT}req_${HOST}_team${N}.txt | Time in usec | Record the time of the client invocation (request going out)
2 | DATA749_app_in_cli${STY}_2srv${C}cli_${IRT}us_${BYT}req_${HOST}_team${N}.txt | Time in usec | Record the time the reply is received
3 | DATA749_app_msg_cli${STY}_2srv${C}cli_${IRT}us_${BYT}req_${HOST}_team${N}.txt | Name of each invocation | Keep track of the name of the invoked method so requests and replies can be matched later
4 | DATA749_app_in_srv${STY}_2srv${C}cli_${IRT}us_${BYT}req_${HOST}_team${N}.txt | Time in usec when the request is received | Record the time the request is received
5 | DATA749_app_out_srv${STY}_2srv${C}cli_${IRT}us_${BYT}req_${HOST}_team${N}.txt | Time in usec when each reply is served | Record the time the request is completed
6 | DATA749_app_msg_srv${STY}_2srv${C}cli_${IRT}us_${BYT}req_${HOST}_team${N}.txt | Name of each invocation | Keep track of the invocation on the server side
7 | DATA749_app_source_srv${STY}_2srv${C}cli_${IRT}us_${BYT}req_${HOST}_team${N}.txt | Name of client invocation | Keep track of which client sent each request

Result
 Experiment 1 - # of Clients = 1, Size of Reply = original, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 2 - # of Clients = 1, Size of Reply = original, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 3 - # of Clients = 1, Size of Reply = original, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 4 - # of Clients = 1, Size of Reply = 256, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 5 - # of Clients = 1, Size of Reply = 256, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 6 - # of Clients = 1, Size of Reply = 256, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 7 - # of Clients = 1, Size of Reply = 512, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 8 - # of Clients = 1, Size of Reply = 512, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 9 - # of Clients = 1, Size of Reply = 512, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 10 - # of Clients = 1, Size of Reply = 1024, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 11 - # of Clients = 1, Size of Reply = 1024, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 12 - # of Clients = 1, Size of Reply = 1024, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 13 - # of Clients = 4, Size of Reply = original, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 14 - # of Clients = 4, Size of Reply = original, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 15 - # of Clients = 4, Size of Reply = original, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 16 - # of Clients = 4, Size of Reply = 256, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 17 - # of Clients = 4, Size of Reply = 256, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 18 - # of Clients = 4, Size of Reply = 256, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 19 - # of Clients = 4, Size of Reply = 512, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 20 - # of Clients = 4, Size of Reply = 512, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 21 - # of Clients = 4, Size of Reply = 512, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 22 - # of Clients = 4, Size of Reply = 1024, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 23 - # of Clients = 4, Size of Reply = 1024, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 24 - # of Clients = 4, Size of Reply = 1024, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 25 - # of Clients = 7, Size of Reply = original, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 26 - # of Clients = 7, Size of Reply = original, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 27 - # of Clients = 7, Size of Reply = original, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 28 - # of Clients = 7, Size of Reply = 256, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 29 - # of Clients = 7, Size of Reply = 256, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 30 - # of Clients = 7, Size of Reply = 256, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 31 - # of Clients = 7, Size of Reply = 512, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 32 - # of Clients = 7, Size of Reply = 512, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 33 - # of Clients = 7, Size of Reply = 512, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 34 - # of Clients = 7, Size of Reply = 1024, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 35 - # of Clients = 7, Size of Reply = 1024, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 36 - # of Clients = 7, Size of Reply = 1024, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 37 - # of Clients = 10, Size of Reply = original, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 38 - # of Clients = 10, Size of Reply = original, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 39 - # of Clients = 10, Size of Reply = original, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 40 - # of Clients = 10, Size of Reply = 256, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 41 - # of Clients = 10, Size of Reply = 256, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 42 - # of Clients = 10, Size of Reply = 256, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 43 - # of Clients = 10, Size of Reply = 512, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 44 - # of Clients = 10, Size of Reply = 512, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 45 - # of Clients = 10, Size of Reply = 512, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 46 - # of Clients = 10, Size of Reply = 1024, Inter-Request Time = 0 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 47 - # of Clients = 10, Size of Reply = 1024, Inter-Request Time = 20 ms

Client Latency, Server Latency, and Middleware Latency plots

Result
 Experiment 48 - # of Clients = 10, Size of Reply = 1024, Inter-Request Time = 40 ms

Client Latency, Server Latency, and Middleware Latency plots

Outliers

Experiment 1

Experiment 2

Experiment 3

Experiment 4

Experiment 5

Experiment 6

Outliers

Experiment 7

Experiment 8

Experiment 9

Experiment 10

Experiment 11

Experiment 12

Outliers

Experiment 13

Experiment 14

Experiment 15

Experiment 16

Experiment 17

Experiment 18

Outliers

Experiment 19

Experiment 20

Experiment 21

Experiment 22

Experiment 23

Experiment 24

Outliers

Experiment 25

Experiment 26

Experiment 27

Experiment 28

Experiment 29

Experiment 30

Outliers

Experiment 31

Experiment 32

Experiment 33

Experiment 34

Experiment 35

Experiment 36

Outliers

Experiment 37

Experiment 38

Experiment 39

Experiment 40

Experiment 41

Experiment 42

Outliers

Experiment 43

Experiment 44

Experiment 45

Experiment 46

Experiment 47

Experiment 48

Observable Trends
Number of Clients = 1

Observable Trends
Number of Clients = 4

Observable Trends
Number of Clients = 7

Observable Trends
Number of Clients = 10

Systematic Unpredictability

The Magical 1%

Challenges and Problems!

  - The evaluation data was collected using stratego as the server, with arkhamhorror, boggle, chess, clue, diplomacy, drlucky, girltalk, risk, roborally, and settlers as client machines. However, some of these machines failed due to high contention, and we were forced to use mahjongg, go, and othello to launch some of our clients for the 10-client tests (Experiments 38, 39, and 41). (devilbunny rejected ssh connections, so it was eliminated from the list of machines we used.)
  - In general, there was a lot of variability in the measurements. We believe this is due to high contention in the network, since many groups were running their evaluations at the same time. Our results show that the database is a major bottleneck. This makes sense because all of the teams run their databases on mahjongg, so that one machine is overloaded with requests.
  - Another source of variability is the difference in workload across the cluster machines. We found that machines with common, well-known game names (e.g., chess, othello) have higher latency due to their popularity.
  - When running tests with 10 clients, we experienced a significant slowdown in latency. Our timer measurement overflowed after running for a long time and produced negative latency values as a result. This can be seen in the plots for Experiments 37-48: in some cases the latency values are clearly wrong (negative). We believe this happened because the timer overflowed (an illustration of how a wrapped counter yields negative differences follows this list). However, towards the end of the experiment some of the cluster machines were no longer available and the database server was killed, so we could not run additional tests to collect correct data.
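
The negative values are consistent with a timestamp counter that wraps around; the exact failure mode in our native timer is an assumption, but the arithmetic below shows how a wrap turns a small latency into a large negative number when the logged timestamps are subtracted during post-processing.

```java
// Illustration (assumed failure mode, not verified): if the timer's microsecond
// count wraps at 32 bits (about every 71.6 minutes), a reply timestamp logged
// after the wrap is smaller than the request timestamp logged before it, so the
// computed latency comes out hugely negative.
public class TimerWrapDemo {
    public static void main(String[] args) {
        final long WRAP = 1L << 32;               // 2^32 microseconds
        long sent     = WRAP - 200;               // request logged just before the wrap
        long received = (sent + 1_000) % WRAP;    // reply logged 1000 us later, after the wrap
        System.out.println(received - sent);      // prints -4294966296 instead of 1000
    }
}
```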
   
   
   
   
   
   
   

Interpretation of the Result

   
  - We identified the two largest outliers in each experiment and plotted graphs to show the breakdown of the sources of variability. In general, the database accounts for the major portion of the client latency, but this is not true in every case; sometimes the middleware took considerably longer than the database. In conclusion, the variability comes from many sources, and it is hard to come up with a deterministic way to predict the behavior of a multi-machine network.
  - The Observable Trends plots show that as the reply size grows, the latency grows. However, the inter-request time did not have much impact on performance. We think this is because Java's thread scheduling is not very precise and does not guarantee the amount of sleep time we request, so the intended pause between requests is only a lower bound (see the client-loop sketch after this list). This trend can be seen for every client count (1, 4, 7, 10), and as the number of clients increases, the latency increases more rapidly.
  - The Magical 1% graph confirms the data from lecture. We had some outliers that made interpreting the data difficult, but after filtering out the top 1% of measurements as outliers, the data looks much more consistent and shows clearer correlations with the varying parameters (see the sketch after this list).
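
For reference, the client measurement loop looks roughly like the sketch below. It reuses the MicroTimer sketch from the setup section; the PokerRemote interface, the use of Thread.sleep() for the inter-request pause, and the logging format are assumptions for illustration, not our exact client code. Thread.sleep(n) only guarantees at least n milliseconds of pause, which is why the inter-request time has a weaker effect than one might expect.

```java
// Sketch of the client measurement loop (illustrative names; Thread.sleep()
// for the pause is an assumption). The real client alternates the two chosen
// two-way methods, 5,000 calls each, for 10,000 requests per run.
public class ClientLoop {
    interface PokerRemote { int readBalance(String playerID); }   // stand-in for the remote interface

    static void runExperiment(PokerRemote server, String playerID,
                              long interRequestMillis) throws InterruptedException {
        for (int i = 0; i < 10000; i++) {
            long sent = MicroTimer.currentTimeMicros();       // probe 1: app_out_cli
            int balance = server.readBalance(playerID);       // two-way invocation
            long received = MicroTimer.currentTimeMicros();   // probe 2: app_in_cli
            System.out.println("readBalance " + sent + " " + received);  // probe 3: app_msg_cli
            if (interRequestMillis > 0) {
                Thread.sleep(interRequestMillis);             // 0, 20, or 40 ms; only a lower bound
            }
        }
    }
}
```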
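
One plausible way to implement the 1% filtering as a post-processing step over the logged latencies (a sketch, not necessarily our exact analysis script):

```java
import java.util.Arrays;

// Sketch of the "Magical 1%" post-processing step: sort the latencies for a
// run and drop the largest 1% before computing the mean.
public class MagicalOnePercent {
    static double trimmedMean(long[] latenciesMicros) {
        long[] sorted = latenciesMicros.clone();
        Arrays.sort(sorted);
        int keep = (int) Math.floor(sorted.length * 0.99);    // keep the lowest 99%
        double sum = 0;
        for (int i = 0; i < keep; i++) {
            sum += sorted[i];
        }
        return sum / keep;                                    // mean without the top 1% outliers
    }
}
```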