
Ansible and how it integrates with HPE OneView, ICsp and HPE iLO

by Hongjun Ma, HPE Hybrid IT Pre-Sales

 

In the new era of hybrid IT and cloud, customers increasingly seek modern provisioning and management platforms and tools to increase business elasticity and scalability. DevOps has become the mainstream practice for driving enterprise IT agility. As a server management and application deployment DevOps tool, Ansible can help customers set up IT infrastructure faster through easy integration with existing IT management infrastructure and resources.

 

HPE OneView, HPE Insight Control server provisioning (ICsp), and HPE Integrated Lights-Out (iLO) for HPE ProLiant servers offer a uniform way of interacting with hardware resources by providing a RESTful API foundation and corresponding Python library.

 

This article walks through how users can leverage Ansible together with the HPE OneView, ICsp, and HPE iLO RESTful APIs to deploy HPE servers and resources with agility.

 

What is Ansible and how it works

First released in 2012, Ansible is a server configuration management and application stack deployment tool.

Compared with previous server configuration management DevOps tools, Ansible doesn't require agents to be installed on the managed servers. Instead, Ansible manages the IT infrastructure by using the SSH protocol to communicate with the managed resources. This dramatically simplifies the configuration of managed systems for two reasons: no process daemons need to run on the remote servers to communicate with a central controller, and IT administrators aren't required to manage or maintain agents on each managed node.

 

Ansible is written in Python and runs on common Linux® platforms. Managed nodes need to have their SSH ports open to communicate with the Ansible control machine and are also required to have Python installed. Common installation methods include cloning from source code, RPM installation from EPEL for RHEL, CentOS, or Fedora users, and PPA installation for Ubuntu users. Users can also leverage the Python package manager "pip" to install Ansible.

 

Inventory

Ansible can communicate with multiple managed nodes at the same time. The managed nodes are defined in the Ansible inventory file. The default path for the inventory file is /etc/ansible/hosts, but users can specify a custom file location.

Here’s what a plain text inventory file looks like:

mail.example.com

[webservers]
www1.example.com
www2.example.com

[dbservers]
db0.example.com
db1.example.com

 

The strings in brackets are group names, which are used to classify systems and decide which systems you are controlling, at what times, and for what purpose. Ansible also allows you to define variables for your hosts directly in the inventory INI file.

host1.example.com                   ansible_ssh_host=192.168.1.100

 

The preferred practice in Ansible is not to store variables in the main inventory file. In addition to storing variables directly in the INI file, host and group variables can be stored in individual files relative to the inventory file. These variable files are in YAML format.

 

Assuming the inventory file path is:

/home/user01/inventory

 

If the host is named "host1" and is in group "group1", variables in YAML files at the following locations will be made available to the host:

/home/user01/inventory/group_vars/group1

/home/user01/inventory/host_vars/host1

 

 Valid file extensions include “.yml”, “.yaml”, “.json”, or no file extension.

Ansible extensively utilizes YAML syntax for its playbooks and variables definitions, because it is easier for humans to read and write than other common data formats such as XML or JSON. Further, there are libraries available in most programming languages for working with YAML.

 

YAML

YAML stands for YAML Ain't Markup Language. It is a very simple, human-readable text format that can be used to store data.

  • YAML is case sensitive.
  • YAML does not allow tabs; spaces are used instead, because tab handling is not universally consistent.
  • Whitespace indentation is used to denote structure.
  • YAML uses three hyphens "---" to start a document.

 

YAML uses data types such as dictionaries and lists to organize data. Dictionaries organize the data in key-value pairs, with the key and the value being separated by a colon “:”.

key: value

anotherkey: another value

Lists collect a number of similar things into one data structure. They are created by prefixing one or more consecutive lines with a “-”.

- item 1
- 23.42
- 57
- true
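Since YAML dictionaries and lists map directly onto the native types of most languages, a quick way to check how a snippet parses is to load it in Python. A minimal sketch, assuming the PyYAML package is installed:

```python
import yaml  # PyYAML, assumed installed (e.g. pip install pyyaml)

doc = """
key: value
anotherkey: another value
items:
  - item 1
  - 23.42
  - 57
  - true
"""

data = yaml.safe_load(doc)
print(data["key"])    # -> value  (a YAML mapping becomes a Python dict)
print(data["items"])  # -> ['item 1', 23.42, 57, True]  (a YAML list becomes a Python list)
```

Note how unquoted scalars are typed automatically: 23.42 becomes a float, 57 an integer, and true a boolean.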

Modules

Modules do the actual work in Ansible. They are what gets executed, whether from the command line or from a playbook.

ansible webserver -m ping

 

 Each module supports taking arguments. Ansible has a rich library of modules helping users to achieve different system management and orchestration tasks.

# Example from Ansible Playbooks.

- command: /sbin/shutdown -t now

 

Playbook

Although Ansible is able to manage systems by passing command-line options, the core of Ansible's configuration, deployment, and orchestration language is the Ansible playbook. If Ansible modules are the tools in your workshop, playbooks are your design plans.

Playbooks are designed to be readable and are developed in a basic text language.

The following playbook example will install and start httpd service using yum package manager on all web servers listed in host inventory file.

---

- hosts: webservers
  remote_user: root
  tasks:
  - name: Install HTTPD service
    yum: name=httpd state=latest
  - name: Start the service
    service: name=httpd state=started enabled=yes

 

Roles

While it’s possible to write a playbook in one large file, for reusability and better organization, it’s better to break down playbooks into individual tasks and handlers. The best way to organize playbooks is to use roles. Roles are ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure. Grouping content by roles also allows easy sharing of roles with other users.

A sample project structure is as follows:

site.yml
webservers.yml
fooservers.yml
roles/
   common/
     files/
     templates/
     tasks/
     handlers/
     vars/
     defaults/
     meta/
   webservers/
     files/
     templates/
     tasks/
     handlers/
     vars/
     defaults/
     meta/

 

 

Developing Ansible modules

Ansible's existing modules are sufficient to cover approximately 90 percent of common tasks. However, there may be scenarios where you need something more. To cover the remaining 10 percent, you can develop your own custom modules, which can then be used in Ansible playbooks.

Modules can be written in any language, and are found in the path specified by ANSIBLE_LIBRARY or the “--module-path” command line option. The directory “./library”, alongside your top-level playbooks, is also automatically added to a search directory.

 

HPE ProLiant server programming 

HPE ProLiant servers can be provisioned and managed using various solutions, including HPE OneView, ICsp, and the ProLiant iLO interface. HPE OneView delivers a unified management platform that supports HPE ProLiant rack servers, HPE BladeSystem, HPE Synergy, HPE 3PAR StoreServ Storage, and HPE ConvergedSystem 700 platforms. It provides server profiles and templates to automate server provisioning, along with server firmware upgrade capability. It can further integrate with other industry-wide solutions such as VMware vCenter™, Microsoft® System Center, and Brocade Network Advisor for end-to-end system integration.

 

ICsp is the solution that provides automated bare-metal OS installation, BIOS and firmware updates, and Smart Array operations for ProLiant servers.

 

Representational State Transfer (REST) is a Web service architectural style that uses basic Create, Read, Update, and Delete (CRUD) operations performed on resources via HTTP POST, GET, PUT, and DELETE.

 

HPE OneView and ICsp virtual appliances have a resource-oriented architecture that provides a unified RESTful interface. Every resource has one Uniform Resource Identifier (URI) and represents a physical device or logical construct. You can use RESTful APIs to manipulate resources.

In addition to HPE OneView and ICsp, the RESTful API will become the main management API for HPE iLO 4-based HPE servers. Its feature set will (in time) become larger than the existing HPE iLO XML API (RIBCL) and IPMI interfaces. Using this API, you can take a full inventory of the server, control power and reset, configure BIOS and HPE iLO 4 settings, and fetch event logs, as well as perform many other tasks. HPE ProLiant Gen9 and Gen8 iLO 4 supports the RESTful API from iLO 4 FW v2.0 and later. For Distributed Management Task Force (DMTF) Redfish spec 1.0 conformance, iLO FW v2.3 and later is required. Compared with Gen8 servers, Gen9 iLO 4 delivers more data and configuration options from the RESTful interface, including BIOS setting configuration.

 

Interacting with HPE ProLiant server's RESTful API interface

The RESTful API provides a consistent data access hierarchy to various client-side applications and programming languages. These include Web browser RESTful plugins, the curl CLI tool, Python, and PowerShell. The following screenshot shows the Chrome browser Postman REST plugin querying the HPE ProLiant iLO interface for the HPE iLO NIC information.

 


 

Figure 1. HPE Server iLO RESTful interface

 

 

The HPE iLO 4 RESTful API uses Base64 to encode the username and password strings as the HTTP authorization header field for the various GET/PUT/POST/DELETE operations.
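Building that authorization header can be sketched with Python's standard library alone; the credentials below are placeholders, not real iLO accounts:

```python
import base64

# Placeholder credentials -- substitute your own iLO username and password
username = "Administrator"
password = "yourpassword"

# HTTP Basic authorization: the header value is "Basic " + Base64("username:password")
token = base64.b64encode("{0}:{1}".format(username, password).encode("ascii")).decode("ascii")
headers = {
    "Authorization": "Basic " + token,
    "Content-Type": "application/json",
}
```

These headers can then be passed to any HTTP client for the GET/PUT/POST/DELETE calls against the iLO RESTful API.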

HPE OneView and Insight Control server provisioning use secure session ID to authenticate with the virtual appliance. When you log in to the appliance using the login-sessions REST API, a session ID is returned. You use the session ID in all subsequent RESTful API operations in the “Auth” header. The session ID is valid for 24 hours.

 

The following screenshot demonstrates an initial HPE OneView RESTful login POST action to retrieve a session ID for use with operations that follow. The appliance username and password strings are included in POST body using JSON format. The response from the appliance will return a session ID.

 


Figure 2. HPE OneView RESTful interface

 

 

Follow-on RESTful actions are required to include the returned session ID in the HTTP "Auth" header, except where noted otherwise in the HPE OneView and ICsp RESTful API reference guides.

 

The following screenshot shows retrieval of the HPE OneView server profiles list using the retrieved session ID in the "Auth" HTTP header.

 


 

Figure 3. HPE OneView RESTful response for retrieving server profiles

 

 

The HTTP "X-Api-Version" header is also required in most HPE OneView and ICsp RESTful operations.

 

The following table lists API version numbers corresponding to the HPE OneView releases.

 

Table 1. HPE OneView release and RESTful API version matrix

HPE OneView release    RESTful API version
1.0, 1.01              3
1.05                   4
1.10                   101
1.20                   120
2.0                    200

 

To automate the HPE OneView and ICsp login authorization process, users can store the session ID value in a Python or PowerShell object variable.
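For instance, a small helper along these lines could prepare the login request and the follow-on headers in Python. This is only a sketch: the appliance address and credentials are placeholders, and the helper just builds the request pieces so any HTTP client can send them:

```python
import json

ONEVIEW = "https://10.16.160.10"  # placeholder appliance address

def login_request(username, password, api_version="200"):
    """Return the URL, headers, and JSON body for the login-sessions POST."""
    url = ONEVIEW + "/rest/login-sessions"
    headers = {"Content-Type": "application/json", "X-Api-Version": api_version}
    body = json.dumps({"userName": username, "password": password})
    return url, headers, body

def auth_headers(session_id, api_version="200"):
    """Headers for follow-on calls: the stored session ID goes in the "Auth" field."""
    return {"Content-Type": "application/json",
            "X-Api-Version": api_version,
            "Auth": session_id}

url, headers, body = login_request("Administrator", "yourpassword")
# POST these with any HTTP client (e.g. requests.post(url, headers=headers, data=body,
# verify=False)), read "sessionID" from the JSON response, and reuse it via
# auth_headers() for up to 24 hours.
```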

 

The following sample demonstrates a Linux bash script that logs in to HPE OneView to retrieve the token from the JSON body, uses Linux tools to trim the output down to the exact session ID string, and then uses it in a follow-on GET operation.

$ cat curl-ov-get.sh
#!/bin/bash

var1=$(curl -s --insecure -H "Content-Type: application/json" -H "X-API-Version: 200" --data '
{
"userName": "Administrator",
"password": "yourpassword"
}' -X POST https://10.16.160.10/rest/login-sessions | python -mjson.tool | grep sessionID | cut -d':' -f2 ); \
var2=$(echo $var1 | sed 's#"##g'); \
curl \
-v \
-s \
--insecure \
-H "Content-Type: application/json" \
-H "X-Api-Version: 200" \
-H "Auth: $var2" \
-X GET \
https://10.16.160.10/rest/$1 \
| python -mjson.tool

 

 

The output of the script execution shows the result of GET Ethernet networks from HPE OneView virtual appliance.

$ ./curl-ov-get.sh ethernet-networks
*   Trying 10.16.160.10...
* Connected to 10.16.160.10 (10.16.160.10) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate: hpe-oneview-01.hpdia.local
> GET /rest/ethernet-networks HTTP/1.1
> Host: 10.16.160.10
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Type: application/json
> X-Api-Version: 200
> Auth: LTMwODU2MTgwNzI2S8SgpEzKLf5ac39GY0bCqhg_oOm0y-oQ
< HTTP/1.1 200 OK
< Date: Tue, 22 Mar 2016 17:50:45 GMT
< Server: Apache
< Content-Type: application/json;charset=UTF-8
< Via: 1.1 example.com
< cache-control: no-cache
< Transfer-Encoding: chunked
{ [820 bytes data]
* Connection #0 to host 10.16.160.10 left intact
{
    "category": "ethernet-networks",
    "count": 1,
    "created": null,
    "eTag": null,
    "members": [
        {
            "category": "ethernet-networks",
            "connectionTemplateUri": "/rest/connection-templates/ff98435d-1f38-4183-b607-c6460a57c978",
            "created": "2016-02-18T07:21:27.837Z",
            "description": null,
            "eTag": "70377939-d230-42a7-87e6-8f6c9e405a7d",
            "ethernetNetworkType": "Tunnel",
            "fabricUri": "/rest/fabrics/62484ade-7c2f-4586-a401-6de73da1ce7d",
            "modified": "2016-02-18T07:21:27.840Z",
            "name": "vc-tunnel-net-A",
            "privateNetwork": false,
            "purpose": "General",
            "smartLink": true,
            "state": "Active",
            "status": "OK",
            "type": "ethernet-networkV3",
            "uri": "/rest/ethernet-networks/fb1f4430-f2ed-431f-8aa6-47bcbb8c440e",
            "vlanId": 0
        }
    ],
    "modified": null,
    "nextPageUri": null,
    "prevPageUri": null,
    "start": 0,
    "total": 1,
    "type": "NetworkCollectionV3",
    "uri": "/rest/ethernet-networks?start=0&count=1"
}
$

 

For Python developers, HPE OneView, ICsp, and HPE iLO offer GitHub library repositories for interacting with the RESTful API. The reference links are listed at the end of this article.

 

Ansible integration with HPE OneView and HPE iLO 4 SDK

By leveraging the existing HPE OneView, ICsp, and HPE iLO 4 Python modules, users can follow Ansible's common module boilerplate to connect Ansible playbooks with the Python modules.

Key parts include always ending the Python module file with:

from ansible.module_utils.basic import *

if __name__ == '__main__':
    main()

 

And instantiating the module class such as:

module = AnsibleModule(
    argument_spec = dict(
        state     = dict(default='present', choices=['present', 'absent']),
        name      = dict(required=True),
        enabled   = dict(required=True, type='bool'),
        something = dict(aliases=['whatever'])
    )
)

 

The “AnsibleModule” provides lots of common code for handling returns, parses your arguments for you, and allows you to check inputs. Successful returns are made like this:

module.exit_json(changed=True, something_else=12345)

 

And failures are just as simple (where “msg” is a required parameter to explain the error):

module.fail_json(msg="Something fatal happened")
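To see how this return contract behaves without a full Ansible installation, it can be exercised with a small stand-in class. This is only an illustrative sketch; real modules use the AnsibleModule class from ansible.module_utils.basic:

```python
import json

class FakeModule(object):
    """Stand-in mimicking AnsibleModule's exit_json/fail_json return contract."""
    def __init__(self, params):
        self.params = params

    def exit_json(self, **kwargs):
        # a real module prints this JSON and exits with status 0
        kwargs.setdefault('changed', False)
        return json.dumps(kwargs)

    def fail_json(self, **kwargs):
        # "msg" is required so Ansible can report why the task failed
        assert 'msg' in kwargs, '"msg" is required on failure'
        kwargs['failed'] = True
        return json.dumps(kwargs)

module = FakeModule({'name': 'demo-web1', 'state': 'present'})
print(module.exit_json(changed=True, something_else=12345))
print(module.fail_json(msg='Something fatal happened'))
```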

 

 

Ansible with HPE OneView and ICsp

Ansible can be used as an orchestration engine to provision HPE OneView server profiles and OS images using ICsp. A sample code repository for this integration is located at github.com/HewlettPackard/oneview-ansible

 

The following example assumes the user has previously set up the HPE OneView and ICsp appliances, and that HPE OneView 2.0 has a server profile template defined. Ansible playbook tasks orchestrate the Python RESTful calls to HPE OneView and ICsp. Available server blades are then assigned server profiles from the template and finally booted into ICsp to have the RHEL 7.1 OS installed on the bare-metal server blades.

 

The sample directory layout is shown as follows:

├── oneview-web-farm
│   ├── ov_site.yml
│   ├── roles
│   │   └── hpe-oneview-server
│   │       └── tasks
│   │           ├── deploy.yml
│   │           └── main.yml
│   └── test-env
│       ├── group_vars
│       │   ├── all
│       │   └── webservers
│       └── hosts

 

The main playbook is "ov_site.yml". It targets the web servers in the inventory host file. Since the target servers do not yet have an OS installed, this playbook does not use SSH to interact with the web hosts. Instead, it delegates these tasks to the local host, using the HPE OneView Python library to communicate with the HPE OneView appliance. It then creates the server profiles on the physical servers from the HPE OneView template. We disabled "gather facts" for the local host and delegated the tasks to the "hpe-oneview-server" role.

$  cat  ov_site.yml

---

# This playbook deploys the whole application stack in this site.

 

- hosts: webservers
  gather_facts: no
  roles:
    - hpe-oneview-server

 

The host inventory is under the "./test-env" subdirectory, and a brief sample shows web servers defined with a group hierarchy.

$ cat hosts
[webservers]
demo-web1 ansible_ssh_host=10.16.160.131
demo-web2 ansible_ssh_host=10.16.160.132
demo-web3 ansible_ssh_host=10.16.160.133

[all-servers:children]
webservers

 

The main playbook under “hpe-oneview-server” role includes one deploy playbook. Additional playbooks for the role can be included if needed.

$ cat main.yml

---

# Create server profiles, Deploy OS

- include: deploy.yml

 

The "deploy.yml" playbook calls the Python module "ov_server" to create server profiles from the HPE OneView server template and apply them to the available server blades. After the server profiles are in place, Ansible initiates the second task to power on the servers, then calls the Python module that instructs the ICsp appliance to provision an OS on the just-booted bare-metal servers.

The ICsp module will retrieve the server_profile JSON object returned from the previous HPE OneView module and use the value of “serialNumber” as the ICsp build plan “server_id”.

$ cat deploy.yml

---

- name: Create Server Profiles
  ov_server:
    oneview_host: "{{ oneview }}"
    username: "{{ ov_username }}"
    password: "{{ ov_password }}"
    server_template: "{{ ov_template }}"
    name: "{{ inventory_hostname }}"
  delegate_to: localhost

- name: Power on servers
  ov_server:
    oneview_host: "{{ oneview }}"
    username: "{{ ov_username }}"
    password: "{{ ov_password }}"
    name: "{{ inventory_hostname }}"
    state: "powered_on"
  when: server_hardware.powerState == "Off"
  delegate_to: localhost

- name: Deploy OS
  hpe_icsp:
    icsp_host: "{{ icsp }}"
    username: "{{ icsp_username }}"
    password: "{{ icsp_password }}"
    server_id: "{{ server_profile.serialNumber }}"
    os_build_plan: "{{ os_build_plan }}"
    custom_attributes: "{{ osbp_custom_attributes }}"
    personality_data: "{{ network_config }}"
  when: created    # only if we just created the server. No re-deployment
  delegate_to: localhost

 

The variables in the earlier playbook use the Jinja2 template format. We can define Ansible variables in host inventory files or under the "group_vars" subdirectory of the inventory file. The following example shows some variable definitions, such as the HPE OneView and ICsp appliance IPs and login credentials.

 

$ cat all

---

# Variables here are applicable to all host groups

 

oneview: 10.16.160.10
ov_username: Administrator
ov_password: yourpassword
ov_template: hj-10g-2nics

icsp: 10.16.160.6
icsp_username: Administrator
icsp_password: yourpassword

subnet_mask: 255.255.255.0
gateway: 10.16.160.1

os_build_plan: 'ProLiant OS - RHEL 7.1 x64 Scripted Install'
osbp_custom_attributes:
  - SSH_CERT: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
network_config:
  hostName: "{{ inventory_hostname }}"
  displayName: "{{ inventory_hostname }}"
  domainName: "hpdia.local"
  nics:
    - mask: "{{ subnet_mask }}"
      dhcp: false
      macAddress: "{{ server_profile.connections[0].mac }}"
      ip4Address: "{{ ansible_ssh_host }}"
      gateway: "{{ gateway }}"
      dns: 10.16.43.247

 

The source Python modules for HPE OneView and ICsp can live in any directory included in the "PYTHONPATH" environment variable. The "ov_server" Python module imports the base HPE OneView Python library:

import hpeOneView as hpeov

 

The "ov_server" main function uses the Ansible Python module boilerplate to pass variables from the Ansible playbook into the Python code. The following example passes the HPE OneView host IP address, the login username and password, and the HPE OneView server template name. The name variable is used as the server profile name and is passed using Ansible's "inventory_hostname".

 

def main():
    module = AnsibleModule(
        argument_spec=dict(
            oneview_host=dict(required=True, type='str'),
            username=dict(required=True, type='str'),
            password=dict(required=True, type='str'),
            server_template=dict(required=False, type='str'),
            state=dict(
                required=False,
                choices=[
                    'powered_on', 'powered_off', 'present', 'absent', 'compliant', 'no_op'
                ],
                default='present'),
            name=dict(required=True, type='str'),
            server_hardware=dict(required=False, type='str', default=None)))

    oneview_host = module.params['oneview_host']
    credentials = {'userName': module.params['username'], 'password': module.params['password']}
    server_template_name = module.params['server_template']
    server_name = module.params['name']
    state = module.params['state']

    try:
        con = hpeov.connection(oneview_host)
        con.login(credentials)
        servers = hpeov.servers(con)
        server_template = None
        if server_template_name:
            server_template = servers.get_server_profile_template_by_name(server_template_name)

        # check if the server already exists - edit it to match the desired state
        server_profile = servers.get_server_profile_by_name(server_name)
        if server_profile:
            if state == 'present':
                changed = update_profile(con, server_profile, server_template)
                facts = gather_facts(con, server_profile)
                module.exit_json(changed=changed, msg='Updated profile', ansible_facts=facts)
            elif state == 'absent':
                delete_profile(con, server_profile)
                module.exit_json(changed=True, msg='Deleted profile')
            elif state in ['powered_on', 'powered_off']:
                set_power_state(con, server_profile, state)
                module.exit_json(changed=True, msg='Set power state')
            elif state in ['compliant']:
                changed = make_compliant(con, server_profile)
                module.exit_json(changed=changed, msg='Made compliant',
                                 ansible_facts=gather_facts(con, server_profile))
            elif state in ['no_op']:
                module.exit_json(changed=False,
                                 ansible_facts=gather_facts(con, server_profile))
        else:
            if state in ['powered_on', 'powered_off']:
                module.fail_json(msg='Cannot find server to put in state: ' + state)
            # we didn't find an existing one, so we create a profile
            elif state in ['present']:
                server_profile = create_profile(module, con, server_name, server_template)
                facts = gather_facts(con, server_profile)
                facts['created'] = True
                module.exit_json(changed=True, msg='Created profile', ansible_facts=facts)
    except Exception, e:
        module.fail_json(msg=e.message)

 

A server profile named "demo-web1" will be created from the existing server template by calling the create_profile function shown below. Many functions and class variables are based on the HPE OneView Python library and modified to add server template functions.

 

def create_profile(module, con, server_name, server_template):
    srv = hpeov.servers(con)
    # find servers that have no profile, are powered off, and match the SHT
    SHT = con.get(server_template['serverHardwareTypeUri'])
    server_hardware_name = module.params['server_hardware']

    tries = 0
    while tries < 10:
        try:
            tries += 1
            if server_hardware_name:
                selected_server_hardware = srv.get_server_by_name(server_hardware_name)
                if selected_server_hardware is None:
                    module.fail_json(msg='Invalid server hardware')
                selected_sh_uri = selected_server_hardware['uri']
            else:  # we need to find an available server.
                # We may need to try this multiple times in case someone else is also
                # trying to use an available server. A file lock would keep concurrent
                # Ansible modules from stepping on each other.
                available_server_hardware = srv.get_available_servers(server_hardware_type=SHT)
                if available_server_hardware['targets'].count == 0:
                    module.fail_json(msg='No Servers are available')

                # targets will list empty bays; we need to pick one that has a server
                selected_sh_uri = None
                index = 0
                while selected_sh_uri == None and index < len(available_server_hardware['targets']):
                    selected_sh_uri = available_server_hardware['targets'][index]['serverHardwareUri']
                    index = index + 1
                selected_server_hardware = con.get(selected_sh_uri)

            # power off the server, then create the profile from the template
            srv.set_server_powerstate(selected_server_hardware, 'Off', True)
            server_profile = srv.new_server_profile_from_template(server_template)
            server_profile['name'] = server_name
            server_profile['serverHardwareUri'] = selected_sh_uri
            return srv.create_server_profile(server_profile)
        except Exception, e:
            # if the server was already assigned (someone grabbed it before we did),
            # ignore and try again
            # module.fail_json(msg=e.message)
            time.sleep(random.randint(2, 5) * tries)
            pass

    raise Exception('Could not allocate server hardware')

 

The following screenshots demonstrate profiles created in HPE OneView and the servers booted up into ICsp server for OS provisioning.

 


 

Figure 4. HPE OneView GUI for server profiles

 

 


 

Figure 5. HPE Insight Control server provisioning managed servers

 

 

Figure 6 demonstrates the orchestrated playbook workflow from the Ansible side.

$ ansible-playbook ov_site.yml -i test-env/hosts

 


 

Figure 6. Ansible output for task flow execution

 

Users can SSH into the servers provisioned to verify information such as the following IP address:

[root@demo-web2 ~]# ip addr show ksdev0
4: ksdev0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 46:18:9c:b0:00:17 brd ff:ff:ff:ff:ff:ff
    inet 10.16.160.132/24 brd 10.16.160.255 scope global ksdev0
       valid_lft forever preferred_lft forever

 

Ansible with HPE iLO 4

When working with HPE iLO 4 RESTful API, Ansible can orchestrate tasks against multiple HPE iLO 4 targets at the same time. This includes operations on updating HPE iLO licenses, retrieving HPE iLO information, and more.

The following example defines Ansible HPE iLO 4 username and password in “group_vars” for all hosts’ variables.

$ cat all

---

# all group variables

 

ilo_username:  Administrator ilo_password:  yourpassword

 

The first task in the Ansible playbook uses the shell module to create an inventory file and write the HPE iLO 4 inventory header fields. The second play in the playbook launches the Python module "dia_ilo_python_module" against all hosts in the inventory file.

$  cat  dia-ilo.yml

---

# YAML file for DIA servers iLO service

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
  - name: Create inventory csv file and populate header
    shell: echo 'inventory-name, mac, ip, mask, mode, gw' > ansible-dial-ilo-inventory.csv

- hosts: all
  connection: local
  gather_facts: False

  tasks:
  - name: Get iLO information
    dia_ilo_python_module:
      dia_ilo_inventory_name: "{{ inventory_hostname }}"
      dia_ilo_ip: "{{ ilo_ip }}"
      dia_ilo_user: "{{ ilo_username }}"
      dia_ilo_pass: "{{ ilo_password }}"

 

The hosts file includes HPE iLO inventory names and IP information, grouped by server model and generation.

$ cat hosts.txt

[gen8-sl2500-servers]
ilo-sl2500-02-A    ilo_ip=10.16.40.8
ilo-sl2500-02-B    ilo_ip=10.16.40.9

[Apollo-2000-servers]
ilo-apollo-01-A    ilo_ip=10.16.40.210
ilo-apollo-01-B    ilo_ip=10.16.40.215
ilo-apollo-01-C    ilo_ip=10.16.40.220
ilo-apollo-01-D    ilo_ip=10.16.40.225

[gen9-rack-servers]
ilo-dl380-gen9-01  ilo_ip=10.16.40.3
ilo-dl380-gen9-02  ilo_ip=10.16.40.4

 

The Python module called by Ansible collects the HPE iLO network information and writes it to the inventory file.

def find_iLO_network_info(dia_ilo_inventory_name, dia_ilo_ip, iLO_loginname, iLO_password):
    f = open('ansible-dial-ilo-inventory.csv', mode='a')
    # for each manager in the managers collection at /rest/v1/Managers
    for status, headers, manager, memberuri in collection(dia_ilo_ip, '/rest/v1/Managers', None, iLO_loginname, iLO_password):

        # verify expected type
        # hint: don't limit to version 0 here as we will rev to 1.0 at some point, hopefully with minimal changes
        assert(get_type(manager) == 'Manager.0' or get_type(manager) == 'Manager.1')

        # for each NIC in this manager's EthernetNICs collection
        for status, headers, nic, memberuri in collection(dia_ilo_ip,
                manager['links']['EthernetNICs']['href'], None, iLO_loginname, iLO_password):

            # verify expected type
            assert(get_type(nic) == 'EthernetNetworkInterface.0' or get_type(nic) == 'EthernetNetworkInterface.1')

            if (nic['Name'] == 'Manager Dedicated Network Interface'):
                hostname = dia_ilo_inventory_name
                ilo_mac = nic['FactoryMacAddress']
                ilo_ip = nic['IPv4Addresses'][0]['Address']
                ilo_ip_subnet_mask = nic['IPv4Addresses'][0]['SubnetMask']
                ilo_ip_mode = nic['IPv4Addresses'][0]['AddressOrigin']
                ilo_ip_GW = nic['IPv4Addresses'][0]['Gateway']

                output = hostname + ',' + ilo_mac + ',' + ilo_ip + ',' + ilo_ip_subnet_mask + ',' + ilo_ip_mode + ',' + ilo_ip_GW + '\n'
                f.write(output)

    f.close()
    return


def main():
    module = AnsibleModule(
        argument_spec = dict(
            dia_ilo_inventory_name = dict(required=True, type='str'),
            dia_ilo_ip             = dict(required=True, type='str'),
            dia_ilo_user           = dict(required=True, type='str'),
            dia_ilo_pass           = dict(required=True, type='str'),
            state                  = dict(default='present', choices=['present', 'absent'])))

    dia_ilo_inventory_name = module.params['dia_ilo_inventory_name']
    dia_ilo_ip = module.params['dia_ilo_ip']
    dia_ilo_user = module.params['dia_ilo_user']
    dia_ilo_pass = module.params['dia_ilo_pass']

    find_iLO_network_info(dia_ilo_inventory_name, dia_ilo_ip, dia_ilo_user, dia_ilo_pass)
    module.exit_json(changed=False)

 

Users can also wrap the Ansible playbook execution CLI in bash scripts so that the inventory file can be printed using the Python "PrettyTable" module, similar to the following figure:
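PrettyTable is a third-party package; the same column alignment can be approximated with only the Python standard library. The rows below are illustrative values in the same shape the playbook's first task writes to the CSV file:

```python
import csv
import io

# Illustrative data in the shape written to ansible-dial-ilo-inventory.csv
# (the MAC/IP values are placeholders)
data = io.StringIO(
    "inventory-name, mac, ip, mask, mode, gw\n"
    "ilo-dl380-gen9-01, aa:bb:cc:dd:ee:01, 10.16.40.3, 255.255.255.0, Static, 10.16.40.1\n"
)

# Strip whitespace, compute each column's width, and print aligned columns
rows = [[cell.strip() for cell in row] for row in csv.reader(data)]
widths = [max(len(row[i]) for row in rows) for i in range(len(rows[0]))]
for row in rows:
    print("  ".join(cell.ljust(width) for cell, width in zip(row, widths)))
```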

 


 

Figure 7. Ansible workflow output within Linux bash script

 

Resources and additional links

HPE OneView Python API library  

github.com/hewlettpackard/python-hponeview

HPE ICsp RESTful API Reference  h17007.www1.hp.com/docs/enterprise/servers/icsp/7.4.1/webhelp/content/c_REST_API_about.html

HPE OneView RESTful API Reference  

h17007.www1.hpe.com/docs/enterprise/servers/oneview2.0/cic-rest/en/content/index.html

HPE ProLiant Python SDK library  github.com/hewlettpackard/python-proliant-sdk

Ansible/HPE OneView library  github.com/hewlettpackard/oneview-ansible

Ansible documentation docs.ansible.com/ansible

Learn more at hpe.com/info/composableprogram

 

 
