The Oozie Orchestration Framework : From 0 to 1

Complex dependencies, managing a multitude of jobs on different time schedules, and managing entire data pipelines are all made easy with Oozie.
  • Price: 2999
  • Format: Online Course
  • Language: English
  • Provider: Loonycorn

About Course

  • Prerequisites: Working with Oozie requires some basic knowledge of the Hadoop eco-system and experience running MapReduce jobs

Oozie is like the formidable, yet super-efficient admin assistant who can get things done for you, if you know how to ask.

Formidable, yet super-efficient: Oozie is formidable because it is written entirely in XML, which is hard to debug when things go wrong. However, once you've figured out how to work with it, it's like magic. Complex dependencies, managing a multitude of jobs on different time schedules, and managing entire data pipelines are all made easy with Oozie.

Get things done for you: Oozie allows you to manage Hadoop jobs as well as Java programs, scripts and any other executable with the same basic setup. It manages your dependencies cleanly and logically.

If you know how to ask: Knowing the right configuration parameters to get the job done is the key to mastering Oozie.
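
To give a flavour of what "asking correctly" looks like, here is a minimal sketch (not course material) that submits and monitors a Workflow with the Oozie Java client API. The Oozie server URL, the HDFS paths and the host:port values are placeholders, and the parameter names (jobTracker, nameNode, inputDir, outputDir) are only examples that a particular workflow.xml would have to reference.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;
    import org.apache.oozie.client.WorkflowJob;

    public class SubmitWorkflow {
        public static void main(String[] args) throws Exception {
            // Point the client at the Oozie server (placeholder URL).
            OozieClient oozie = new OozieClient("http://localhost:11000/oozie");

            // The application path tells Oozie where workflow.xml lives on HDFS;
            // the remaining properties are parameters the workflow definition references.
            Properties conf = oozie.createConfiguration();
            conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/demo/my-wf-app");
            conf.setProperty("jobTracker", "resourcemanager:8032");
            conf.setProperty("nameNode", "hdfs://namenode:8020");
            conf.setProperty("inputDir", "/user/demo/input");
            conf.setProperty("outputDir", "/user/demo/output");

            // Submit and start the workflow, then poll until it leaves the RUNNING state.
            String jobId = oozie.run(conf);
            while (oozie.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
                System.out.println("Workflow " + jobId + " is running...");
                Thread.sleep(10_000);
            }
            System.out.println("Final status: " + oozie.getJobInfo(jobId).getStatus());
        }
    }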

Topics covered in this course

What you will get from this course?

    • Install and set up Oozie
    • Configure Workflows to run jobs on Hadoop
    • Configure time-triggered and data-triggered Workflows
    • Configure data pipelines using Bundles
    • Workflow Management: Workflow specifications, Action nodes, Control nodes, Global configuration, real examples with MapReduce and Shell actions which you can run and tweak

    • Time-based and data-based triggers for Workflows: Coordinator specification, Mimicking simple cron jobs, specifying time and data availability triggers for Workflows, dealing with backlog, running time-triggered and data-triggered coordinator actions (see the submission sketch after this list)

    • Data Pipelines using Bundles: Bundle specification, the kick-off time for bundles, running a bundle on Oozie
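
As a rough illustration (again, not course material), Coordinators and Bundles are submitted through the same client and configuration pattern as Workflows; only the application-path property changes. The server URL, the HDFS paths, and the "start"/"end" parameter names below are placeholders that the corresponding coordinator.xml and bundle.xml would have to reference.

    import java.util.Properties;
    import org.apache.oozie.client.OozieClient;

    public class SubmitCoordinatorAndBundle {
        public static void main(String[] args) throws Exception {
            OozieClient oozie = new OozieClient("http://localhost:11000/oozie");

            // Time-triggered / data-triggered Coordinator: point Oozie at the
            // coordinator application instead of a workflow application.
            Properties coordConf = oozie.createConfiguration();
            coordConf.setProperty("oozie.coord.application.path",
                    "hdfs://namenode:8020/user/demo/my-coord-app");
            coordConf.setProperty("start", "2024-01-01T00:00Z");  // placeholder; meaningful only if coordinator.xml references ${start}
            coordConf.setProperty("end",   "2024-01-07T00:00Z");  // placeholder; meaningful only if coordinator.xml references ${end}
            System.out.println("Coordinator job: " + oozie.run(coordConf));

            // Bundle: a collection of Coordinators managed as one data pipeline.
            Properties bundleConf = oozie.createConfiguration();
            bundleConf.setProperty("oozie.bundle.application.path",
                    "hdfs://namenode:8020/user/demo/my-bundle-app");
            System.out.println("Bundle job: " + oozie.run(bundleConf));
        }
    }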

Who should buy this course?

    • Engineers, analysts and sysadmins who are interested in big data processing on Hadoop
    • Beginners who have no knowledge of the Hadoop eco-system should not take this course
    • Students should have basic knowledge of the Hadoop eco-system and should be able to run MapReduce jobs on Hadoop
 
Provided by

Loonycorn is us, Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh. Between the four of us, we have studied at Stanford, IIM Ahmedabad, the IITs and have spent years (decades, actually) working in tech, in the Bay Area, New York, Singapore and Bangalore.
