• Participate in Agile development on a large Hadoop-based data platform as a member of a distributed team.
• Code programs to load data from diverse data sources into Exadata and Hive structures using Spark and SQL.
• Translate complex functional and technical requirements into detailed design.
• Analyze large data stores and code business logic in SQL/Scala on Apache Spark.
• Create jobs using Autosys and wrapper scripts written in Unix shell.
• Code and test prototypes, coding against existing frameworks where applicable.
Apache, Shell, UNIX, SQL, Hadoop, Scala, Kafka
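As a minimal sketch of the Autosys wrapper-script pattern mentioned above (not an actual production script; the function name, log path, and placeholder workload are all assumptions), a Unix shell wrapper typically runs the workload, logs the outcome, and propagates the exit status so the scheduler can detect success or failure:

```shell
#!/bin/sh
# Sketch of a scheduler wrapper: log start/end and return the
# workload's exit status so Autosys (or any scheduler) can react.
run_job() {
    job_name="$1"
    log_file="${TMPDIR:-/tmp}/${job_name}_$(date +%Y%m%d).log"

    echo "Starting $job_name" > "$log_file"

    # Placeholder for the real workload, e.g. a spark-submit call.
    # A harmless command stands in here so the sketch is runnable.
    echo "running $job_name workload" >> "$log_file" 2>&1
    status=$?

    if [ "$status" -eq 0 ]; then
        echo "Finished $job_name: SUCCESS" >> "$log_file"
    else
        echo "Finished $job_name: FAILURE" >> "$log_file"
    fi
    return "$status"
}
```

In practice the placeholder line would be replaced with the actual batch command, and the Autosys job definition would point at this script so job status in the scheduler mirrors the workload's exit code.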