HDFS, HBase, YARN, Spark, MapReduce, Kafka, Hive
*Be part of a global tech team to provision and manage the Hadoop ecosystem infrastructure.
*Support global corporate infrastructure with offices in the U.S. and China.
*Monitor all infrastructure components and applications, and provide emergency response.
*Document system/network design and operation procedures.
*Work with global Operations and Engineering teams to design and build large-scale systems.
*5+ years of work experience, including 3+ years in operations and 1+ years with Hadoop technology.
*Hands-on experience operating a Hadoop-based data platform (HDFS, HBase, YARN, Spark, MapReduce, Kafka, Hive).
*Strong understanding of best practices for software engineering, system design, and scalable, fault-tolerant web architecture.
*Knowledge of LDAP, TCP, NTP, SNMP, SMTP, ARP, HTTP, SSH, RSYNC, and SSL on Linux.
*Familiarity with monitoring systems such as Nagios, Zabbix, Splunk, and ELK.
*Excellent English written and verbal communication skills.
*Work experience with agile methods such as Scrum.
*BS degree in Computer Science or Engineering.
*Strong understanding of Hadoop internals and the ability to troubleshoot issues from source code is a big plus.
*Experience supporting production environments on AWS, Alicloud, Baidu Cloud, or similar technology is a plus.
*Great passion for new technology and strong learning ability are a big plus.