Hi Javier,
I haven’t seen any direct guidelines for ATLAS, but here is some information you might find useful. From what I’ve seen, ATLAS is pretty well designed, with most of the heavy lifting happening in backend API calls to the database rather than in the web app itself. That means the main thing you’ll need to worry about is your users’ requirements for how long a query should take to return results. There’s an older but still useful forum post with more detail: Hardware specs to run OHDSI technology stack
The key piece of info I took away was that disk I/O and database indexing seem to be the biggest factors in speeding up ATLAS performance. One of the teams set up a Dell PowerEdge R730xd with 768 GB of memory and a beefy SSD setup. From the post: “Multiple NVMe SSD or multiple SSD with RAID 5 with 12Gbps interface can dramatically reduce the time for disk I/O.” That setup reportedly cut query runtimes from over an hour (presumably on a more typical server) down to about 17 seconds. You’ll need to decide what an acceptable wait time for query results is for your users and match that to your budget.
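Just to make the indexing point concrete, here’s a minimal sketch of timing a representative CDM-style aggregation before and after adding an index. This is not the SQL ATLAS itself generates; the host, database, schema, and index names below are placeholders I made up, and it assumes a PostgreSQL CDM reachable via DBI/RPostgres. Adjust for your own dbms and environment.

```r
library(DBI)

# Placeholder connection details -- swap in your own host/db/credentials.
con <- dbConnect(
  RPostgres::Postgres(),
  dbname   = "ohdsi",
  host     = "your-db-host",
  user     = "atlas_user",
  password = Sys.getenv("CDM_DB_PASSWORD")
)

# A representative aggregation over condition_occurrence, similar in shape
# to the kind of query a cohort characterization might run.
sql <- "
  SELECT condition_concept_id, COUNT(DISTINCT person_id) AS persons
  FROM cdm.condition_occurrence
  GROUP BY condition_concept_id
"

# Time the query with whatever indexes currently exist.
print(system.time(dbGetQuery(con, sql)))

# Add an index covering the columns the query touches, then time it again.
# (The OHDSI CDM DDL scripts already include recommended indexes; this is
# only meant to show how much a missing index can matter.)
dbExecute(con, "
  CREATE INDEX IF NOT EXISTS idx_co_concept_person
  ON cdm.condition_occurrence (condition_concept_id, person_id)
")
print(system.time(dbGetQuery(con, sql)))

dbDisconnect(con)
```

On a large condition_occurrence table the difference between the two timings is usually what separates “coffee-break queries” from interactive ones, which is why the post above leans so heavily on fast disks plus good indexing.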
If you’re using RStudio alongside ATLAS, it’s typically deployed on its own server. The recommendation in that thread is roughly 0.75 GB of memory per anticipated concurrent user for the R server.
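As a back-of-the-envelope sizing example using that 0.75 GB-per-user figure (the user count and OS headroom here are just assumptions, not recommendations):

```r
# Rough R server memory sizing based on the ~0.75 GB per concurrent user
# figure quoted above. The values below are example assumptions only.
concurrent_users <- 20    # assumed peak number of simultaneous R users
gb_per_user      <- 0.75  # figure from the forum thread
os_headroom_gb   <- 4     # assumed reserve for the OS and other services

total_gb <- concurrent_users * gb_per_user + os_headroom_gb
cat(sprintf("Plan for roughly %.0f GB of RAM on the R server\n", total_gb))
# 20 * 0.75 + 4 = 19 GB
```

So for ~20 concurrent analysts you’d be looking at something in the 16–32 GB range, more if people are loading large extracts into memory.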
I hope that helps!