Due to the increased casual use of ML models, we have decided to scale up and scale out our server instances (cf. the changelog).
The previous environment could not withstand daily, undemanding tasks such as
- performing inference with a partially trained ML[1] model (generally considered small) of up to a few hundred megabytes;
- persisting indices over a few megabytes of text, received from an external generator application.
We have also extensively refined the server maintenance utilities and optimized the network and data pipeline architectures.
Footnotes
↑1 Machine Learning