Infrastructure

Available resources

Depending on community enthusiasm and contributions, several hardware setups can be made available during the POC phases. We will start with a five-node cluster using Ceph as the backend block storage. This initial setup will serve to find out how, and to what extent, a private community base can contribute to research projects and students.

POC phase 1 setup

This will be the first Proof of Concept environment, running OpenStack Mitaka with Ceph. A rough capacity estimate for this setup is sketched after the hardware list below.

  • 1 x Xeon E3 hypervisor – 32GB RAM, hosting:
    • Controller node
      • 4 cores / 24GB RAM
      • 1TB Glance image storage
      • 2 x 1 GbE connectivity
    • Network node
      • 2 cores / 4GB RAM
      • 3 x 1 GbE connectivity
  • 4 x quad-core compute nodes (3 of them also run a Ceph monitor), each with:
    • Quad-core CPU
    • 8GB RAM
    • 250GB SSD in RAID 1
    • 3TB Ceph OSD
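
As a rough sanity check, the following Python sketch estimates what this phase 1 hardware offers in practice. The Ceph replication size of 3 and the Nova overcommit ratios are assumptions (the common defaults around the Mitaka release), not values taken from the specification above.

    # Rough capacity estimate for the POC phase 1 cluster.
    # Assumptions (not part of the spec above): Ceph pool replication size 3,
    # Nova defaults of cpu_allocation_ratio=16.0 and ram_allocation_ratio=1.5.

    COMPUTE_NODES = 4
    CORES_PER_NODE = 4           # quad-core compute nodes
    RAM_GB_PER_NODE = 8
    OSD_TB_PER_NODE = 3

    CEPH_REPLICATION_SIZE = 3    # assumed pool size (Ceph default)
    CPU_ALLOCATION_RATIO = 16.0  # assumed Nova default overcommit
    RAM_ALLOCATION_RATIO = 1.5   # assumed Nova default overcommit

    raw_storage_tb = COMPUTE_NODES * OSD_TB_PER_NODE
    usable_storage_tb = raw_storage_tb / CEPH_REPLICATION_SIZE

    vcpus = COMPUTE_NODES * CORES_PER_NODE * CPU_ALLOCATION_RATIO
    vram_gb = COMPUTE_NODES * RAM_GB_PER_NODE * RAM_ALLOCATION_RATIO

    print(f"Raw Ceph capacity:    {raw_storage_tb} TB")
    print(f"Usable Ceph capacity: {usable_storage_tb:.1f} TB (size={CEPH_REPLICATION_SIZE})")
    print(f"Schedulable vCPUs:    {vcpus:.0f}")
    print(f"Schedulable RAM:      {vram_gb:.0f} GB")

Under these assumptions, the 12TB of raw OSD space yields roughly 4TB of usable block storage, and the 16 physical cores and 32GB of compute RAM translate into about 256 schedulable vCPUs but only 48GB of schedulable RAM, so RAM rather than CPU is the practical limit on instance count.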

POC phase 2 setup

Later in the project we will offer a much bigger and faster platform for the final proof of concept. A post-deployment verification sketch follows the hardware list below.

  • 1 x Xeon E3 hypervisor – 32GB RAM, hosting:
    • Controller node
      • 4 cores / 24GB RAM
      • 1TB Glance image storage
      • 2 x 1 GbE connectivity
    • Network node
      • 2 cores / 4GB RAM
      • 3 x 1 GbE connectivity
  • 4 x quad-core compute nodes (3 of them also run a Ceph monitor), each with:
    • Quad-core CPU
    • 8GB RAM
    • 250GB SSD in RAID 1
    • 3TB Ceph OSD
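
Whichever phase is deployed, a quick smoke test after installation is useful. The sketch below is a convenience wrapper, not part of the deployment itself: it assumes the standard ceph and openstack command-line clients are installed on the controller node and that admin credentials (e.g. an openrc file) have already been sourced.

    # Minimal post-deployment smoke test: confirm Ceph reports a healthy
    # cluster and that all compute nodes are registered with Nova.
    # Assumes the standard `ceph` and `openstack` CLIs are on the PATH and
    # admin credentials are already sourced.
    import subprocess
    import sys


    def run(cmd):
        """Run a CLI command and return its stdout, exiting on failure."""
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            sys.exit(f"{' '.join(cmd)} failed: {result.stderr.strip()}")
        return result.stdout


    def main():
        ceph_health = run(["ceph", "health"]).strip()
        print(f"Ceph health: {ceph_health}")

        hypervisors = run(["openstack", "hypervisor", "list",
                           "-f", "value", "-c", "Hypervisor Hostname"])
        names = [line for line in hypervisors.splitlines() if line.strip()]
        print(f"Registered hypervisors ({len(names)}): {', '.join(names)}")

        if not ceph_health.startswith("HEALTH_OK"):
            sys.exit("Ceph cluster is not HEALTH_OK yet")
        if len(names) < 4:
            sys.exit("Expected 4 compute nodes to be registered")


    if __name__ == "__main__":
        main()

If the script exits cleanly, the block storage and compute layers are at least reachable; deeper validation, such as booting a test instance on the Ceph-backed storage, can follow from there.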