Configure LOCALCORE
LOCALCORE represents tailor-made versatility in the world of artificial intelligence, designed to meet even the most demanding requirements. Thanks to its innovative enterprise AI architecture, LOCALCORE scales across different hardware: it can run on a company's own on-premise servers but is best suited to specialized high-performance hardware configurations. It processes a high number of user requests simultaneously and handles enormous amounts of data efficiently.

Each variant is optimized to maximize these core features by offering robust compute capabilities and extensive storage. Whether you work in a data-intensive environment or need an infrastructure that supports numerous parallel requests without sacrificing performance, LOCALCORE provides the flexibility and scalability needed to grow and adapt with your needs.

Discover a new level of efficiency and productivity with LOCALCORE, specifically designed to support high performance requirements in any enterprise environment.
Core 1
Core 1 is a workstation configuration designed for users who require high performance when processing large amounts of data and handling numerous requests simultaneously. With two Nvidia GPUs, supported by the LocalCore software, this workstation is well suited to a wide range of AI applications and data-rich environments.

Ideal for:
- Various AI applications
- Simultaneous processing of multiple requests (up to 70)

Available language models:
- Mixtral-8x7B-Instruct
- Llama-3-8B-Instruct
- Mistral-7B-Instruct
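The Core variants differ mainly in how many requests they can serve at once. As a rough illustration of what "simultaneous processing" means from the client side, the sketch below fans out a batch of prompts with a thread pool. Note that `query_model` is a hypothetical stand-in for an HTTP call to a LocalCore inference endpoint; the URL and request shape mentioned in the comments are assumptions, not documented LocalCore interfaces.

```python
from concurrent.futures import ThreadPoolExecutor

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real inference call. In a deployment
    # this would POST the prompt to the LocalCore server (for example
    # an HTTP endpoint on the workstation) and return the completion.
    return f"response to: {prompt}"

def run_batch(prompts, max_workers=70):
    # Core 1 handles up to 70 simultaneous requests, so a client can
    # fan out up to that many calls in parallel without queueing.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(query_model, prompts))

answers = run_batch([f"question {i}" for i in range(70)])
print(len(answers))  # prints 70, one answer per request
```

On a larger variant such as Core 2 or Core 3, the same pattern applies with a higher `max_workers` ceiling (256+).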
Core 2
Core 2 offers a robust server rack solution for companies that need a powerful hardware configuration but do not require the full GPU capacity of Core 3. Equipped with two Nvidia GPUs and ample memory, this configuration also provides excellent conditions for data-intensive applications.

Ideal for:
- A wide range of AI applications
- Simultaneous processing of multiple requests (256+)

Available language models:
- Llama-3-70B-Instruct
- Mixtral-8x7B-Instruct
- Llama-3-8B-Instruct
- Mistral-7B-Instruct
Core 3
Core 3 is our most powerful server rack, designed for companies that place high demands on the simultaneous processing of numerous user requests and large amounts of data. Equipped with four Nvidia GPUs and extensive memory, Core 3 offers exceptional computing power and fast data processing, supported by the advanced LocalCore software.

Ideal for:
- High-performance AI applications
- Simultaneous processing of numerous user requests (256+)
- Efficient handling of large amounts of data

Available language models:
- Mixtral-8x22B-Instruct
- Llama-3-70B-Instruct
- Mixtral-8x7B-Instruct
- Llama-3-8B-Instruct
- Mistral-7B-Instruct
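A quick way to see why the variants ship different model lists is to estimate how much GPU memory each model's weights occupy. The sketch below applies the common rule of thumb of roughly 2 bytes per parameter for FP16 weights; the parameter counts are approximate public figures for these open models, not LOCALCORE specifications, and activations plus KV cache add further overhead on top.

```python
# Approximate public parameter counts, in billions (assumptions,
# not LOCALCORE specifications).
MODELS_B_PARAMS = {
    "Mistral-7B-Instruct": 7.2,
    "Llama-3-8B-Instruct": 8.0,
    "Mixtral-8x7B-Instruct": 46.7,
    "Llama-3-70B-Instruct": 70.6,
    "Mixtral-8x22B-Instruct": 141.0,
}

def fp16_weight_gb(billions: float) -> float:
    """Rough GPU memory for FP16 weights alone: ~2 bytes/parameter."""
    return billions * 2  # 1 billion params ~ 2 GB in FP16

for name, b in MODELS_B_PARAMS.items():
    print(f"{name}: ~{fp16_weight_gb(b):.0f} GB FP16 weights")
```

Under this estimate, the largest model on the list needs several times the weight memory of the smallest, which is why only the four-GPU Core 3 configuration lists Mixtral-8x22B-Instruct.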