Performance Test Status - Langite'23
Written by Pavithra

Performance testing is a critical component of our software testing efforts, allowing us to evaluate how our systems and applications perform under diverse conditions. In this report, we'll provide an overview of the testing methodology and results obtained for various scenarios.

Load Testing: Understanding the Variables 📊

Load testing is an integral part of performance testing, enabling us to assess how our systems behave under heavy load. To understand this type of testing, we must consider three essential variables (see the sketch after this list):

  1. Concurrent Users 🧑‍💻🧑‍💻🧑‍💻: The number of users online simultaneously.

  2. Scenario Completion Time ⏱️: The allowed duration (in seconds) for a scenario to finish.

  3. Test Iterations 🔁: The number of times the test is repeated or looped.
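
To make these variables concrete, here is a minimal sketch of how they could drive a load test. This is an illustration only: the run_scenario and load_test helpers, the BASE_URL, and the /home endpoint are hypothetical stand-ins, not our actual test tooling.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # assumed HTTP client; any equivalent works

BASE_URL = "https://example.test"  # placeholder, not the real app under test

def run_scenario(session: requests.Session) -> float:
    """Run one end-to-end scenario and return its duration in seconds."""
    start = time.monotonic()
    session.get(f"{BASE_URL}/home", timeout=30)  # hypothetical scenario step
    return time.monotonic() - start

def load_test(concurrent_users: int, completion_time_s: float, iterations: int) -> None:
    """Drive the three variables: users online simultaneously,
    allowed scenario duration, and number of test loops."""
    def worker() -> list[float]:
        with requests.Session() as session:
            return [run_scenario(session) for _ in range(iterations)]

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(worker) for _ in range(concurrent_users)]
        durations = [t for f in futures for t in f.result()]

    slow = sum(1 for t in durations if t > completion_time_s)
    print(f"{len(durations)} runs, {slow} exceeded the {completion_time_s}s limit")
```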

Methodology 📈

Our testing methodology involved collecting critical performance metrics: Total Samples, Average Response Time, Throughput, and Received KB/sec. We gathered these results using our load testing tools and data collection mechanisms.

Definitions for Each of These Metrics 🔍
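
The sketch below shows one common way these four metrics can be derived from raw sample data. The Sample shape and summarize helper are hypothetical, included purely to make the formulas explicit.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One recorded request: response time in seconds and payload size in bytes."""
    elapsed_s: float
    received_bytes: int

def summarize(samples: list[Sample], test_duration_s: float) -> dict:
    """Compute the four reported metrics from raw samples (illustrative formulas)."""
    total = len(samples)                                # Total Samples
    avg_rt = sum(s.elapsed_s for s in samples) / total  # Average Response Time (s)
    throughput = total / test_duration_s                # Throughput (requests/sec)
    received_kb = sum(s.received_bytes for s in samples) / 1024
    return {
        "total_samples": total,
        "avg_response_time_s": round(avg_rt, 3),
        "throughput_rps": round(throughput, 2),
        "received_kb_per_sec": round(received_kb / test_duration_s, 2),  # Received KB/sec
    }

# e.g. three samples collected over a 10-second window
print(summarize([Sample(0.8, 20_480), Sample(1.2, 15_360), Sample(0.9, 30_720)], 10))
```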

Test Scenarios 🧪

We conducted performance testing on two types of scenarios (a combined configuration sketch follows both descriptions):

Simple Full Scenarios

  • Designed to handle 25 users within 10 seconds for 1 loop.

  • Focused on end-to-end scenarios, including screen access, with relatively light loads.

  • Results:

    • Response times in these scenarios were acceptable, indicating the system's capability to handle moderate loads efficiently.

    • Error rates remained within a reasonable range, reflecting system stability under these conditions.

    • Throughput was higher than in the Heavy Full Scenarios, suggesting the system's capacity to manage a substantial number of users simultaneously.

Heavy Full Scenarios

  • Configured to accommodate 20 users within 10 seconds for 1 loop.

  • Focused on end-to-end scenarios involving accessing screens, loading extensive data, and applying heavy load.

  • Results:

    • Response times were slightly higher due to more extensive data processing and heavier loads.

    • Error rates remained relatively low, indicating system stability under increased loads.

    • Throughput was slightly lower compared to the Simple Full Scenarios, primarily due to the heavier loads and data processing.
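
Putting the two configurations side by side, here is how they could be expressed as parameter sets for the hypothetical load_test helper sketched earlier. The names and structure are illustrative, not an export from our actual test plans.

```python
# Illustrative parameter sets mirroring the two scenario types above,
# reusing the hypothetical load_test() helper sketched earlier.
SCENARIOS = {
    "simple_full": {"concurrent_users": 25, "completion_time_s": 10, "iterations": 1},
    "heavy_full":  {"concurrent_users": 20, "completion_time_s": 10, "iterations": 1},
}

for name, params in SCENARIOS.items():
    print(f"Running {name} ...")
    load_test(**params)
```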

Current Release Performance Testing Status 💻💪

The charts and tables below show the performance results of the brains.app application across different releases:

Samples vs Average Response Time:

  • Sample Count:

    • The number of samples increased in 2019-H2 compared to 2018-H1, then decreased in 2022-H2 relative to 2019-H2. It rose again in 2023-H1 before declining in 2023-H2.

    • The sample count variations may indicate changes in testing frequency or load.

  • Average Response Time:

    • The average response time fluctuated significantly over the years.

    • There were spikes in response time during 2019-H2 and 2020-H1. However, significant improvements were observed in 2021-H1 and continued into 2022-H2, with further enhancements recorded in both halves of 2023.

    • The data suggests efforts to optimize response times, with significant improvements in 2023-H2 compared to earlier periods.

It's important to consider the context and goals of the performance testing when interpreting these metrics. The improvements in average response time during 2023-H2 suggest that optimization efforts were implemented to improve system performance.

Requests Per Second vs Received KB/sec:

  • Requests per Second:

    • The requests per second indicate varying loads and testing frequencies over the years.

    • Notable peaks occurred in 2021-H1 and 2023-H1, signifying periods of increased request rates.

  • Received Data KB/sec:

    • The rate of received data saw a significant increase from 2018-H1 to 2019-H2, suggesting enhanced data processing or transfer activities.

    • This rate reached its peak in 2023-H1 before decreasing in 2023-H2.

This comparative analysis underscores the variation in both requests per second and received data rates over the years. These variations likely align with shifts in system load.

2023-H2 Compared to Previous Periods:

  • In 2023-H2, sample count and requests per second declined compared to 2023-H1, possibly pointing to reduced testing demands.

  • However, 2023-H2 showed remarkable improvements in average response time compared to both 2023-H1 and 2022-H2.

  • The received data rate decreased in 2023-H2 compared to 2023-H1 but remained higher than in 2022-H2.
