View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update |
---|---|---|---|---|---|
0005579 | Composr | core | public | 2024-01-28 02:59 | 2024-03-30 15:13 |
Reporter | Patrick Schmalstig | Assigned To | Patrick Schmalstig | ||
Severity | Feature-request | ||||
Status | resolved | Resolution | fixed | ||
Product Version | |||||
Fixed in Version | |||||
Summary | 0005579: Update SLOW_SERVER / relative performance scores | ||||
Description | In Composr, we have relative performance calculations against the "lead developer's machine". But Chris is no longer the lead developer, and those machine specs are outdated. Instead, change this system to an absolute score measured against a "reference machine". Ideally, we should base the logic / calculations on a method already in common use online, so that we can then find the published score of a machine whose specs closely match Composr's recommendations. If that cannot be done, just document what different machines scored. | ||||
Tags | Roadmap: v11 | ||||
Time estimation (hours) | |||||
Sponsorship open | |||||
Normative performance was changed to an absolute score:

* The score is calculated by timing how long it takes to perform 10,000 MD5 operations on uniqid values.
* From that time, we calculate the average number of operations that could be performed in 1 second; this number is the final score.
* For example, a score of 900,000 means the server can perform about 900,000 MD5 hashes on uniqids in one second.

Based on this new calculation, the 2014 iMac referenced in v10 would have scored about 180,000. Our warning threshold for normative performance was set at 4% of the reference score, which is about 8,000. In v11, I think we should raise the threshold to 25,000, as v11 requires more resources than v10 did. The threshold can be configured in Health Check. I may raise the default even higher still, as some slow servers struggle immensely with generating cache but do fine once the cache is available.
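The scoring described above can be sketched as follows. Composr itself is PHP, so this Python version is only illustrative; the function and threshold names are assumptions, not the actual Composr identifiers, and `uuid4().hex` stands in for PHP's `uniqid()`:

```python
import hashlib
import time
import uuid

def benchmark_score(iterations: int = 10_000) -> int:
    """Approximate the absolute performance score: time a fixed number of
    MD5 hashes over unique IDs, then extrapolate to operations per second."""
    start = time.perf_counter()
    for _ in range(iterations):
        # Hash a fresh unique ID each iteration (analogous to md5(uniqid()))
        hashlib.md5(uuid.uuid4().hex.encode()).hexdigest()
    elapsed = time.perf_counter() - start
    # Average operations per second is the final score
    return int(iterations / elapsed)

SLOW_SERVER_THRESHOLD = 25_000  # proposed v11 default (configurable in Health Check)

score = benchmark_score()
is_slow = score < SLOW_SERVER_THRESHOLD
print(f"score: {score}, slow: {is_slow}")
```

On this scale, the 2014 iMac mentioned above would print a score around 180,000 and would comfortably clear the 25,000 threshold.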
Consider improving this further, perhaps by basing the score on how long it takes to load the Admin Dashboard (uncached). That metric could be inaccurate, since dashboard contents differ between sites, but it would still be better than the MD5 metric: I have a case where two of my server stacks both score over 900,000, yet one loads pages significantly slower than the other. The problem is that this crosses into page-speed territory, which is a different metric already monitored separately by Health Check. So the CPU speed score might need to remain as it is, based on hard calculations rather than web requests.
Date Modified | Username | Field | Change |
---|---|---|---|
2024-01-28 02:59 | Patrick Schmalstig | New Issue | |
2024-01-28 02:59 | Patrick Schmalstig | Status | non-assigned => assigned |
2024-01-28 02:59 | Patrick Schmalstig | Assigned To | => Patrick Schmalstig |
2024-01-28 02:59 | Patrick Schmalstig | Tag Attached: Roadmap: v11 | |
2024-03-30 15:01 | Patrick Schmalstig | Summary | v11: Update SLOW_SERVER / relative performance scores => Update SLOW_SERVER / relative performance scores |
2024-03-30 15:03 | Patrick Schmalstig | Status | assigned => resolved |
2024-03-30 15:03 | Patrick Schmalstig | Resolution | open => fixed |
2024-03-30 15:03 | Patrick Schmalstig | Note Added: 0008500 | |
2024-03-30 15:06 | Patrick Schmalstig | Note Edited: 0008500 | View Revisions |
2024-03-30 15:07 | Patrick Schmalstig | Note Edited: 0008500 | View Revisions |
2024-03-30 15:11 | Patrick Schmalstig | Note Added: 0008501 | |
2024-03-30 15:12 | Patrick Schmalstig | Note Edited: 0008501 | View Revisions |
2024-03-30 15:13 | Patrick Schmalstig | Note Edited: 0008501 | View Revisions |