DevOps Engineer at Unbxd
Design and develop effective, scalable solutions to administer data clusters, large-scale operations, and infrastructure systems.
Architect systems, infrastructure, and platforms using Linux and Amazon Web Services to support applications in the Big Data space.
Own and deliver the implementation of new methods for systems, deployment, monitoring, management, and automation.
Technical depth: exposure to a wide variety of problems and the automation needed to solve them.
Devise schemes to transfer terabytes of data from diverse locations securely and reliably, and to monitor and verify each move.
Diagnose and resolve problems on live systems in real time.
Monitor grid health and performance, use critical thinking to find areas for improvement, and develop monitoring frameworks and metrics to predict system behavior proactively and take appropriate steps.
Plan hardware and facility capacity, provision new resources, and understand the various capacity parameters and their cardinality.
Design and propose solutions for security policies in the Hadoop ecosystem, and manage those policies effectively.
Infrastructure and platform security.
Infrastructure and platform cost management.
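The data-transfer verification responsibility above can be sketched with a checksum manifest: record a digest for each file at the source, re-compute it after the move, and flag anything missing or mismatched. A minimal Python sketch; `verify_transfer` and the manifest layout are illustrative assumptions, not an existing Unbxd tool.

```python
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so terabyte-scale files never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(manifest: dict[str, str], dest_dir: Path) -> list[str]:
    """Check each transferred file against the source-side manifest.

    Returns the names of files that are missing or whose checksum differs.
    """
    failures = []
    for name, expected in manifest.items():
        target = dest_dir / name
        if not target.exists() or sha256sum(target) != expected:
            failures.append(name)
    return failures
```

In practice the manifest would be produced at the source, shipped alongside the data, and any non-empty failure list would trigger a re-transfer of just those files.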
3+ years of experience in a DevOps role, including:
In-depth Linux/Unix knowledge and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
Amazon Web Services.
DNS, TCP/IP, routing, HA & load balancing.
Configuration management using tools like Puppet, Chef, or Ansible.
Apache Hadoop (optional)
SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB, and HBase/Aerospike.
Build and packaging tools like Jenkins and RPM/Yum.
HA and load balancing using tools like Elastic Load Balancer and HAProxy.
Monitoring tools like Nagios or similar.
Log management tools like Logstash/Syslog/ElasticSearch or similar.
Metrics collection tools like Ganglia, Graphite, OpenTSDB or similar.
Agile Project Management.
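The proactive-monitoring expectation above (using metrics to predict system behavior rather than react to it) can be illustrated with a tiny capacity-forecast helper: fit a least-squares trend to usage samples and extrapolate when a resource will fill. A minimal sketch under stated assumptions; `hours_until_full` is a hypothetical name, the samples are (hours-elapsed, usage) pairs, and a real setup would pull them from a metrics store such as Ganglia, Graphite, or OpenTSDB.

```python
def hours_until_full(samples: list[tuple[float, float]], capacity: float):
    """Fit a least-squares line to (hour, usage) samples and extrapolate
    the hour at which usage reaches capacity.

    Returns None when the trend is flat or shrinking (nothing to alert on),
    or when there are too few samples to fit a line.
    """
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * u for t, u in samples)
    denom = n * sxx - sx * sx
    if denom == 0:
        return None  # fewer than two distinct timestamps
    slope = (n * sxy - sx * sy) / denom
    intercept = (sy - slope * sx) / n
    if slope <= 0:
        return None  # usage is not growing
    return (capacity - intercept) / slope
```

A monitoring job could run this over the last day of disk-usage samples and page only when the projected fill time drops below a provisioning lead time, which is one way to "predict system behavior proactively" instead of alerting at a fixed threshold.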