Resource availability
Depending on user enthusiasm and contributions, several hardware setups can be made available during the Proof of Concept (POC) phases. We will initially start with a 5-node cluster backed by Ceph storage. Its purpose is to find out how, and to what extent, a private community user base can contribute to this research project. For now, all deployed instances will have to be small, as the setup is designed to test and compare workflow approaches. Changes to the environment can be made during the proof of concept period.
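Keeping instances small could be enforced with a dedicated instance flavor. A sketch using the standard OpenStack CLI; the flavor name and sizes here are placeholder assumptions, not agreed values:

```shell
# Hypothetical "small" flavor for the POC period:
# 1 vCPU, 1 GB RAM, 10 GB root disk, visible to all projects.
openstack flavor create poc.small \
    --vcpus 1 \
    --ram 1024 \
    --disk 10 \
    --public
```

Users would then launch instances with `--flavor poc.small`; larger flavors can be added later as the environment grows.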

First POC setup
The first Proof of Concept environment will run OpenStack Mitaka with Ceph as the storage backend.
- 1 x Xeon E3 hypervisor – 32GB, hosting:
  - Controller node
    - 4 cores / 24GB
    - 1 TB Glance image storage
    - 2 x 1 GbE connectivity
  - Network node
    - 2 cores / 4GB
    - 3 x 1 GbE connectivity
- 4 x compute nodes with a total of 16 cores / 32GB, each with:
  - Intel quad core (4C/4T) / 8GB
  - 250 GB SSD in RAID 1
  - 3 TB Ceph OSD
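With only four OSD hosts, the Ceph pool defaults are worth pinning down explicitly. A minimal ceph.conf sketch; the replica counts and placement-group numbers shown are common small-cluster values, assumed here rather than taken from the POC design:

```ini
[global]
# Keep 3 copies of each object; stay writable as long as 2 remain.
osd pool default size = 3
osd pool default min size = 2
# Modest placement-group count for a small 4-OSD cluster.
osd pool default pg num = 128
osd pool default pgp num = 128
```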

Second POC setup
The second POC environment will run a later version of OpenStack and Ceph, including additional functionality and architecture requested by users.
- 1 x Xeon E5 hypervisor, 8 cores – 128GB, hosting:
  - Controller node
    - 6 cores / 24GB
    - 2 x 1 GbE
  - Glance imaging node
    - 4 cores / 32GB
    - 6 TB image storage
    - 3 x trunked GbE
  - Network node
    - 4 cores / 4GB
    - 3 x 1 GbE connectivity
- 4 x compute nodes with a total of 96 cores / 512GB, each with:
  - 24 AMD cores (24C/24T) @ 2.4 GHz
  - 128GB
  - 250 GB SSD in RAID 1
  - 3 TB Ceph OSD
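The trunked GbE links on the Glance imaging node would typically be bonded at the OS level. A sketch for a Debian/Ubuntu `/etc/network/interfaces`; the interface names, address, and LACP mode are assumptions for illustration:

```ini
# Hypothetical 802.3ad (LACP) bond across three GbE NICs.
auto bond0
iface bond0 inet static
    address 10.0.0.10/24
    bond-slaves eth0 eth1 eth2
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast
```

802.3ad requires matching LACP configuration on the switch side; a simpler `balance-alb` mode would avoid that dependency at the cost of less even load distribution.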

Future
There is the potential to expand as user demand grows.