Test Design of xCAT Terraform Provider Phase 1
groups=__TFPOOL-FREE
- Source code of xCAT Terraform Provider
- xCAT Terraform Provider quick start
- Minidesign of xCAT Terraform Provider
- How to apply and orchestrate compute instances from an xCAT cluster with Terraform
- Terraform official portal
- Some attribute definitions of nodes in the xCAT DB
- disksize: The size of the disks for the node, in GB.
- memory: The size of the memory for the node, in MB.
- cputype: The CPU model name for the node.
- cpucount: The number of CPUs for the node.
- rack: The frame the node is in.
- room: The room where the node is located.
- unit: The vertical position of the node in the frame.
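These attributes can be inspected directly with `lsdef`; a minimal sketch, assuming one of the bogus nodes defined later in this plan (e.g. `xtpbogusp9phyn1`) already exists in the xCAT DB:
# lsdef xtpbogusp9phyn1 -i disksize,memory,cputype,cpucount,rack,room,unit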
- Terraform download (the x86_64 version is downloaded from the official Terraform web site; the ppc64le version is downloaded from xcat.org)
- xCAT Terraform Provider download (both the x86_64 and ppc64le versions are downloaded from xcat.org)
- xCAT API service (pip and container versions will eventually be available, but in 2.15 only the pip version is recommended)
- xCAT (rpm and container versions are available; in 2.15 only the rpm version is recommended)
- There is no user authentication in 2.15. A Terraform user can operate the xCAT API service directly for other purposes. Authentication will be covered in a later xCAT version.
- In 2.15, Terraform users have to use the xCAT REST API to get the osimage and node lists in the xCAT DB (see the sketch after this list).
- In 2.15, the xCAT Terraform Provider only supports `==` and `!=` in selectors; it does not support `>=`, `<=`, or other advanced matching methods.
- In 2.15, it is recommended to install xCAT and the xCAT API service on the same node.
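A minimal sketch of those REST API queries, assuming the service exposes the standard xCAT REST API (`xcatws`) over https and using `<xcat mn>` and the `xtpu1`/`12345` test credentials as placeholders:
// list the nodes defined in the xCAT DB
$ curl -X GET -k 'https://<xcat mn>/xcatws/nodes?userName=xtpu1&userPW=12345&pretty=1'
// list the osimages defined in the xCAT DB
$ curl -X GET -k 'https://<xcat mn>/xcatws/osimages?userName=xtpu1&userPW=12345&pretty=1'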
product component | location | ip | remark |
---|---|---|---|
xCAT Terraform provider | node1 | 10.x.x.1 | binary used directly |
xCAT API service | node2 | 10.x.x.2 | pip installation |
xCAT | node2 | 10.x.x.2 | rpm installation |
node pool | 2 real nodes | 10.x.x.4-5 | needed to cover provision tests |
All xCAT cluster configuration must be done ahead of time.
- xCAT installation
- Create the xCAT Terraform Provider user accounts in the `password` table on the xCAT MN ahead of time (done by the admin):
# chtab key=xcat passwd.username=xtpu1 passwd.password=12345
# chtab key=xcat passwd.username=xtpu2 passwd.password=12345
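The new entries can be verified on the xCAT MN; a quick check using a standard xCAT command:
# tabdump passwd | grep xtpu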
- Use the above accounts `xtpu1` and `xtpu2` to access the xCAT API service to apply for tokens, and record these tokens (see the sketch below).
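A minimal token-request sketch, assuming the xCAT API service exposes the standard xCAT REST API `tokens` endpoint and that `<xcat api service>` is a placeholder for its https address:
$ curl -X POST -k 'https://<xcat api service>/xcatws/tokens?pretty=1' -H Content-Type:application/json --data '{"userName":"xtpu1","userPW":"12345"}'
// repeat for xtpu2 and record both returned token ids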
- Define all the kinds of nodes supported by the xCAT Terraform Provider in the xCAT DB ahead of time. At least two real nodes are needed to test provisioning in parallel; the other nodes can be bogus nodes.
# chdef xtpbogusp9phyn1 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTC arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.1 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.01 room="r1" rack="rC" unit="u1" status=powering-on
# chdef xtpbogusp9phyn2 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.2 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.02 room="r1" rack="rB" unit="u2" status=powering-on
# chdef xtpbogusp9phyn3 groups=free usercomment=",ib=1,gpu=0" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.3 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.03 room="r1" rack="rB" unit="u3" status=powering-on
# chdef xtpbogusp9phyn4 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.4 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.04 room="r1" rack="rB" unit="u4" status=powering-on
# chdef xtpbogusp9phyn5 groups=free usercomment=",ib=1,gpu=1" mtm=8335-GTB arch=ppc64le disksize=200 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.5 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.05 room="r1" rack="rB" unit="u5" status=powering-on
# chdef xtpbogusp9phyn6 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=300 memory=256 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.6 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.06 room="r1" rack="rB" unit="u6" status=powering-on
# chdef xtpbogusp9phyn7 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=128 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.7 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.07 room="r1" rack="rB" unit="u7" status=powering-on
# chdef xtpbogusp9phyn8 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER9 (raw), altivec supported" cpucount=32 ip=100.50.20.8 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.08 room="r1" rack="rB" unit="u8" status=powering-on
# chdef xtpbogusp9phyn9 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER8 (raw), altivec supported" cpucount=20 ip=100.50.20.9 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.09 room="r1" rack="rB" unit="u9" status=powering-on
# chdef xtpbogusp9phyn10 groups=free usercomment=",ib=0,gpu=1" mtm=8335-GTB arch=ppc64le disksize=400 memory=64 cputype="POWER9 (raw), altivec supported" cpucount=16 ip=100.50.20.10 cons=openbmc mgt=openbmc netboot=petitboot profile=compute mac=E9.83.35.EB.20.10 room="r1" rack="rB" unit="u10" status=powering-on
# chdef xtpbogusx86phyn1 groups=free usercomment=",ib=0,gpu=0" mtm=7912AC1 arch=x86_64 disksize=300 memory=64 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=16 ip=100.50.30.01 mac=86.79.12.C1.30.01 room="r2" rack="rAC1" unit="u1" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute
# chdef xtpbogusx86phyn2 groups=free usercomment=",ib=1,gpu=0" mtm=7912AC2 arch=x86_64 disksize=400 memory=128 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=32 ip=100.50.30.02 mac=86.79.12.C2.30.02 room="r2" rack="rAC1" unit="u2" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute
# chdef xtpbogusx86phyn3 groups=free usercomment=",ib=1,gpu=0" mtm=7912AC2 arch=x86_64 disksize=400 memory=128 cputype=" Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz" cpucount=32 ip=100.50.30.03 mac=86.79.12.C2.30.03 room="r2" rack="rAC1" unit="u3" status=powering-on cons=ipmi mgt=ipmi netboot=xnba profile=compute
// The 2 nodes below should be real nodes and need to be prepared ahead of time
# chdef xtprealp8vm1 groups=free usercomment=",ib=0,gpu=0" mtm=8335-GTA arch=ppc64le cons=ipmi mgt=kvm netboot=grub2 profile=compute vmhost=xxx mac=x.x.x.x.x.x ip=x.x.x.x
# chdef xtprealx86vm1 groups=free usercomment=",ib=0,gpu=0" mtm=7912AC3 arch=x86_64 cons=kvm mgt=kvm netboot=xnba profile=compute vmhost=xxx mac=x.x.x.x.x.x ip=x.x.x.x
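After the definitions, the free pool can be spot-checked on the xCAT MN; a small sketch using standard xCAT commands:
// list all nodes currently in the free group
# nodels free
// spot-check the attributes of one bogus node
# lsdef xtpbogusp9phyn10 -i groups,arch,mtm,disksize,memory,cpucount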
- Install the xCAT Terraform Provider on `node1`. The xCAT Terraform Provider installation involves an operating system user: `terraform init` will look for the provider binary under that user's home directory, `<user_home>/.terraform.d/plugins/`. In this test, assume this OS user is `root`.
// log in to node1 as root
// Terraform ppc64le version
$ wget https://media.github.ibm.com/releases/207181/files/158261?token=AABlEjj4dE6g_afKtyCL0TTcD8gGrNE9ks5c3OiqwA%3D%3D -O /usr/bin/terraform
$ chmod +x /usr/bin/terraform
// Terraform x86_64 version
# mkdir /tmp/terraform
# wget https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip -P /tmp/terraform
# cd /tmp/terraform && unzip terraform_0.11.13_linux_amd64.zip
# mv /tmp/terraform/terraform /usr/local/sbin/
# rm -rf /tmp/terraform
// xCAT Terraform Provider ppc64le version
$ mkdir -p /root/.terraform.d/plugins
$ wget https://media.github.ibm.com/releases/207181/files/158263?token=AABlElBlpu3Q8UGn3xJBlrHbN60nKizLks5c3Qq4wA%3D%3D -O /root/.terraform.d/plugins/terraform-provider-xcat
$ chmod +x /root/.terraform.d/plugins/terraform-provider-xcat
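A quick sanity check of the installation; a sketch, assuming the binaries were placed in the locations used above:
// confirm the terraform binary is executable and on PATH
# terraform version
// confirm the provider plugin is in place for the root user
# ls -l /root/.terraform.d/plugins/terraform-provider-xcat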
- Install the xCAT API service on node2 following its documented installation steps.
Basic function tests include: a customer applies, updates, and frees compute instances, and gets the configuration of each compute instance.
- Create work directory
$ mkdir -p /terraform_test/xtpu1/task1 && cd /terraform_test/xtpu1/task1
- Create the below `xcat.tf` file under `/terraform_test/xtpu1/task1`
#cat /terraform_test/xtpu1/task1/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu1"
password = "12345"
}
- init terraform for this task
# terraform init
- Create the below `node.tf` file, `/terraform_test/xtpu1/task1/node.tf`
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTB"
gpu=0
ib=1
disksize>=300
memory<=128
cputype="POWER9 (raw), altivec supported"
cpucount<=20
}
count=1
}
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC2"
disksize==400
memory==128
cputype="Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz"
cpucount==32
}
count=2
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- terraform plan/apply
$ terraform apply
- Expect `xtpbogusp9phyn10`, `xtpbogusx86phyn2` and `xtpbogusx86phyn3` to be returned.
- Expect all 3 returned nodes to have the attributes defined in the xCAT DB, like below:
//xtpbogusp9phyn10
mac=E9.83.35.EB.20.10 room="r1" rack="rB" unit="u10"
//xtpbogusx86phyn2
ip=100.50.30.02 mac=86.79.12.C2.30.02 room="r2" rack="rAC1" unit="u2"
//xtpbogusx86phyn3
ip=100.50.30.03 mac=86.79.12.C2.30.03 room="r2" rack="rAC1" unit="u3"
- Expect the value of the `groups` attribute of `xtpbogusp9phyn10`, `xtpbogusx86phyn2` and `xtpbogusx86phyn3` in the xCAT DB to have been changed from `free` to `xtpu1`.
- Expect the value of the `groups` attribute of the rest of the nodes in the node pool to still be `free` (see the verification sketch below).
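A verification sketch for the expectations above, run on the xCAT MN and in the task directory on node1 respectively (node names as expected above):
// on the xCAT MN: the applied nodes should now be in group xtpu1
# lsdef xtpbogusp9phyn10,xtpbogusx86phyn2,xtpbogusx86phyn3 -i groups
// on node1, in /terraform_test/xtpu1/task1: the outputs recorded in the state
$ terraform output x86nodes
$ terraform output ppc64lenodes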
- Create work directory
$ mkdir -p /terraform_test/xtpu1/task2 && cd /terraform_test/xtpu1/task2
- Create the below `xcat.tf` file under `/terraform_test/xtpu1/task2/`
#cat /terraform_test/xtpu1/task2/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu1"
password = "12345"
}
- init terraform for this task
# terraform init
- Create the below `node.tf` file under `/terraform_test/xtpu1/task2/`
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTA"
}
count=1
osimage="rhels7.6-ppc64le-install-compute"
powerstatus=on
}
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC3"
}
count=1
osimage="rhels7.4-x86_64-netboot-compute"
powerstatus=off
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- terraform plan/apply
$ terraform plan
$ terraform apply
- Expect `xtprealp8vm1` and `xtprealx86vm1` to be returned.
- Expect the value of the `groups` attribute of `xtprealp8vm1` and `xtprealx86vm1` in the xCAT DB to have been changed from `free` to the user name `xtpu1`.
- Expect `xtprealp8vm1` to be pingable and `xtprealx86vm1` to not be pingable.
- Expect `xtprealp8vm1` to be installed with rhels7.6.
- Expect that, with the IP, username and password returned by `apply`, one can log in to `xtprealp8vm1` (see the verification sketch below).
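A verification sketch for these expectations, assuming the checks are run from the xCAT MN (which can reach the nodes and ssh into provisioned ones):
// group ownership
# lsdef xtprealp8vm1,xtprealx86vm1 -i groups
// reachability and power state
# ping -c 3 xtprealp8vm1
# ping -c 3 xtprealx86vm1
# rpower xtprealp8vm1,xtprealx86vm1 stat
// installed OS on the provisioned node
# ssh xtprealp8vm1 cat /etc/redhat-release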
- Change `node.tf` to power on `xtprealx86vm1`:
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC3"
}
count=1
osimage="rhels7.4-x86_64-netboot-compute"
powerstatus=on
}
- Expect `xtprealx86vm1` to be powered on and the OS to be rhels7.6 (the OS is not reinstalled this time).
- Expect the value of the `groups` attribute of `xtprealx86vm1` and `xtprealp8vm1` in the xCAT DB to have been changed from `free` to `xtpu1`.
- Create work directory and init terraform
$ mkdir -p /terraform_test/xtpu1/task3 && cd /terraform_test/xtpu1/task3 && terraform init
- Create the below `xcat.tf` file under `/terraform_test/xtpu1/task3/`
#cat /terraform_test/xtpu1/task3/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu1"
password = "12345"
}
- init terraform for this task
# terraform init
- Create the below `node.tf` file under `/terraform_test/xtpu1/task3`
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTF"
}
count=1
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- terraform plan/apply
$ terraform plan
$ terraform apply
- Expect apply failure.
- Clean up the node pool on the xCAT MN
$ chdef free groups=all
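A quick check that the pool is really empty; a sketch run on the xCAT MN after the chdef above (no node should report groups=free any more):
# lsdef all -i groups | grep -B1 free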
- Log in to node1 as root
- Create work directory
$ mkdir -p /terraform_test/xtpu1/task4 && cd /terraform_test/xtpu1/task4
- Create the below `xcat.tf` file under `/terraform_test/xtpu1/task4/`
#cat /terraform_test/xtpu1/task4/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu1"
password = "12345"
}
- init terraform for this task
# terraform init
- Create the below `node.tf` file under `/terraform_test/xtpu1/task4/`
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTB"
}
count=1
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- terraform plan/apply
$ terraform plan
$ terraform apply
- Expect apply failure.
- Repeat the steps in apply case 1 as user `xtpu1`.
- Create a work directory for `xtpu2`
$ mkdir -p /terraform_test/xtpu2/task1 && cd /terraform_test/xtpu2/task1
- Create the below `xcat.tf` file under `/terraform_test/xtpu2/task1`
#cat /terraform_test/xtpu2/task1/xcat.tf
provider "xcat" {
url = "<the access url of xCAT API service>"
username = "xtpu2"
password = "12345"
}
- init terraform for this task
# terraform init
- Apply the node `xtpbogusp9phyn10` below as user `xtpu2`
resource "xcat_node" "ppc64lenode" {
selectors {
hostname="xtpbogusp9phyn10"
}
count=1
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- Expect apply failure.
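A quick check that the contended node is still owned by xtpu1 after the failed apply; a sketch run on the xCAT MN:
# lsdef xtpbogusp9phyn10 -i groups
// expected: groups=xtpu1, not xtpu2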
- Repeat the steps in apply case 1 as user `xtpu1`.
- Change the `node.tf` file `/terraform_test/xtpu1/task1/node.tf` to:
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC2"
disksize==400
memory==64
cputype="Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz"
cpucount==32
}
count=1
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- Expect `xtpbogusx86phyn1` to be returned.
- Expect the value of the `groups` attribute of `xtpbogusx86phyn1` to be set to `xtpu1`, and the value of the `groups` attribute of `xtpbogusp9phyn10`, `xtpbogusx86phyn2` and `xtpbogusx86phyn3` in the xCAT DB to have been changed back to `free`.
- Repeat steps 1-5 of apply case 2.
- Change `node.tf` to:
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTA"
}
count=1
osimage="rhels7.5-ppc64le-install-compute"
powerstatus=on
}
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC3"
}
count=1
osimage="rhels7.4-x86_64-netboot-compute"
powerstatus=off
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- Expect `xtprealp8vm1` and `xtprealx86vm1` to still be returned.
- Expect the OS of `xtprealp8vm1` to change from rhels7.6 to rhels7.5 (see the verification sketch below).
- Expect nothing to change for `xtprealx86vm1`.
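A verification sketch for the reinstall, run from the xCAT MN (assuming it can ssh into the provisioned node):
// the os/provmethod recorded in the xCAT DB
# lsdef xtprealp8vm1 -i os,provmethod
// the OS actually running on the node
# ssh xtprealp8vm1 cat /etc/redhat-release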
- Repeat steps 1-5 of apply case 2.
- Change `node.tf` to:
#cat /terraform_test/xtpu1/task2/node.tf
resource "xcat_node" "ppc64lenode" {
selectors {
arch="ppc64le"
machinetype="8335-GTA"
}
count=1
osimage="rhels7.6-ppc64le-install-compute"
powerstatus=off
}
resource "xcat_node" "x86node" {
selectors {
arch="x86_64"
machinetype="7912AC3"
}
count=1
osimage="rhels7.4-x86_64-netboot-compute"
powerstatus=on
}
output "x86nodes" {
value=[
"${xcat_node.x86node.*.name}"
]
}
output "ppc64lenodes" {
value=[
"${xcat_node.ppc64lenode.*.name}"
]
}
output "login_credential" {
value="username: root; password: cluster"
}
- Expect `xtprealp8vm1` and `xtprealx86vm1` to still be returned.
- Expect the power status of `xtprealp8vm1` to change to off.
- Expect the power status of `xtprealx86vm1` to change to on (see the verification sketch below).
- Expect everything else to remain unchanged.
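A verification sketch for the power-state expectations, run on the xCAT MN:
# rpower xtprealp8vm1,xtprealx86vm1 stat
// expected: xtprealp8vm1 off, xtprealx86vm1 on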
- Repeat steps 1-5 of apply case 2.
- Touch some files on `xtprealp8vm1` and `xtprealx86vm1`
- Destroy the resources
# terraform destroy
- Expect `xtprealp8vm1` and `xtprealx86vm1` to have been freed.
- Expect the values of the `groups` attribute of `xtprealp8vm1` and `xtprealx86vm1` in the xCAT DB to have been changed back to `free`.
- Expect the files touched on `xtprealp8vm1` and `xtprealx86vm1` to be gone, i.e. the information of the previous user must not be leaked (see the verification sketch below).
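A verification sketch for the destroy expectations, run on the xCAT MN (the file path is whatever was touched in the earlier step):
// both nodes should be back in the free pool
# lsdef xtprealp8vm1,xtprealx86vm1 -i groups
// after a node is provisioned again, the touched files must not exist
# ssh xtprealp8vm1 ls <touched file path>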
- Repeat apply test case 5, but this time expect the apply to succeed.
- If different users apply and destroy node resources many times in parallel, check whether node resources leak (i.e. whether the number of nodes in the `free` group of the xCAT DB decreases abnormally; see the sketch below).
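A minimal monitoring sketch for the free pool size while the parallel apply/destroy test runs, run on the xCAT MN; using `lsdef -w` to select nodes whose groups attribute is free is an assumption about how the pool is tracked:
// run periodically and confirm the count returns to its initial value after all destroys finish
# lsdef -t node -w groups==free | wc -l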
Because xCAT needs a highly privileged user to operate xcatd, make sure the user management solution of xCAT Terraform does not leak the privileged user's information (user name, password, token, certificate, and so on) to end users. An end user could leverage this information to operate xcatd directly or to change confidential configuration.
Every user should have their own workspace. Check:
- Whether one user has a way to access another user's resources (tf files, state files, configuration files, ...)
- Whether one user has a way to operate the nodes applied by another user.
- Whether there is confidential information (password, token, certificate) in files the end user can access (see the sketch after this list).
- Whether there is confidential information (password, token, certificate) hard-coded in the source code.
- Whether, if customer A applies a group of nodes and customer B applies another group of nodes, it is possible for customer A to access the nodes in group B through the nodes A applied, without any authorization (the xCAT MN can log in to any node it deployed without a password).
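A minimal sketch of the file-level checks, assuming the end user's workspace lives under /terraform_test/<user>/ as in this plan:
// look for credentials or tokens readable by the end user
$ grep -rni 'passw\|token\|certificate' /terraform_test/xtpu1/
// the local Terraform state may also record provider configuration
$ grep -i passw /terraform_test/xtpu1/task1/terraform.tfstate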
[Case 1] If the user has nothing at first, whether it is easy for the user to set up the xCAT Terraform product.
- Whether there are detailed setup steps (doc)
- Whether the steps are correct and easy to follow.
[Case 2] Whether it is easy for the user to set up the xCAT Terraform product based on an existing xCAT MN (manually).
If the user already has an xCAT MN, whether it is easy for the user to integrate the xCAT Terraform product with it.
- Whether there are detailed setup steps (doc)
- Whether the steps are correct and easy to follow.