Hadoop Administration Course

Learn Hadoop Administration In-depth

Advanced

Approx. 9 days
Assumes 1 hr/day (work at your own pace)


  1. 7m 58s
    1. Introduction to Hadoop Administration
      1m 38s
    2. Prerequisites
      1m 25s
    3. What you will learn
      4m 55s
  2. 12m 56s
    1. What is Big Data
      3m 16s
    2. What is Apache Hadoop
      2m 56s
    3. HDFS in Hadoop
      3m 34s
    4. HDFS Architecture
      3m 10s
  3. 50m 23s
    1. Download Virtual Box
      2m 32s
    2. Downloading CentOS
      2m 14s
    3. Set Up a Machine in VirtualBox
      3m 10s
    4. Add the CentOS ISO
      2m 41s
    5. Start Your Machine
      3m 45s
    6. Start the Machines and Set Up Communication Between Them
      7m 12s
    7. Install Java in VirtualBox on CentOS
      4m 13s
    8. Download Hadoop
      1m 26s
    9. Add group and user
      3m 10s
    10. Generating the SSH Key
      4m 59s
    11. Distributing the keys
      6m 55s
    12. Setting the Java Path
      3m 34s
    13. Hadoop path setting
      4m 32s
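The SSH key steps above (lessons 10 and 11) and the path settings (lessons 12 and 13) typically come down to a handful of commands. A minimal sketch, assuming a `hduser` account and nodes named `master` and `slave1` (hostnames and paths are illustrative, not from the course):

```shell
# Generate a passwordless RSA key pair for the hadoop user
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

# Distribute the public key to every node, including the master itself,
# so the start scripts can ssh without a password prompt
ssh-copy-id hduser@master
ssh-copy-id hduser@slave1

# Typical additions to ~/.bashrc (exact paths depend on your install)
export JAVA_HOME=/usr/lib/jvm/java
export HADOOP_HOME=/home/hduser/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
```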
  4. 39m 13s
    1. Starting with Hadoop
      3m 13s
    2. Changing Conf files
      6m 19s
    3. Format Namenode
      4m 10s
    4. Starting the daemons
      2m 45s
    5. Checking the file system
      2m 26s
    6. Creating a Directory and putting data
      2m 51s
    7. Drawbacks of Storing Data in /tmp
      5m 40s
    8. Creating a Parent Directory to Store the Data Permanently
      6m 30s
    9. Start Admin commands
      1m 31s
    10. Browser Interface
      3m 48s
  5. 45m 19s
    1. Setting Up Pseudo-Distributed Mode
      13m 28s
    2. Viewing the Fully Distributed Mode
      3m 40s
    3. Using a Script file to monitor the cluster
      1m 42s
    4. Viewing under replicated blocks
      7m 47s
    5. Setting the third machine
      6m 25s
    6. Bringing up the third machine
      7m 17s
    7. Over-Replicated Blocks
      5m 01s
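Under- and over-replicated blocks (lessons 4 and 7) are usually spotted with `fsck`, and per-node daemon status with `jps`. A brief sketch:

```shell
# Report file-system health; the summary counts under-replicated,
# over-replicated, corrupt, and missing blocks
hadoop fsck / -blocks -locations

# On each machine, list the running Hadoop daemons (NameNode,
# DataNode, JobTracker, TaskTracker, SecondaryNameNode)
jps
```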
  6. 58m 55s
    1. The metasave Command
      2m 16s
    2. Dynamically Write Data with different Replication
      2m 16s
    3. Rack Awareness
      4m 43s
    4. Enabling Rack Awareness
      6m 35s
    5. The Default Rack
      3m 31s
    6. Working of Secondary Namenode
      5m 33s
    7. Secondary namenode working in the cluster setup
      3m 30s
    8. Changes in the Cluster: Viewing the fsimage
      8m 00s
    9. Shutting down the namenode
      8m 30s
    10. Manually Talking to the NameNode
      7m 55s
    11. Safe Mode in Hadoop
      3m 45s
    12. Using Namespace
      5m 21s
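Several of the lessons above correspond to one-line admin commands. A sketch, assuming a running Hadoop 1.x cluster; rack awareness itself is enabled by pointing the `topology.script.file.name` property at a script that maps IPs to rack IDs:

```shell
# metasave dumps block-level state (under-replicated blocks, blocks
# pending deletion) to a file in the NameNode's log directory
hadoop dfsadmin -metasave meta.txt

# Write a file with a non-default replication factor for just this write
hadoop fs -D dfs.replication=2 -put localfile.txt /user/hduser/

# Enter, query, and leave safe mode manually
hadoop dfsadmin -safemode enter
hadoop dfsadmin -safemode get
hadoop dfsadmin -safemode leave
```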
  7. 54m 15s
    1. Commissioning and decommissioning Nodes
      1m 45s
    2. Settings for Commissioning and Decommissioning
      6m 29s
    3. Decommissioning of Nodes
      5m 09s
    4. Commissioning in Hadoop
      6m 02s
    5. Balancer
      3m 09s
    6. Backing up the data
      5m 48s
    7. Backing up Data continued
      5m 22s
    8. Restore the data
      1m 52s
    9. Deleting Data Permanently
      3m 09s
    10. DistCp Introduction
      1m 04s
    11. Working with DistCp
      3m 20s
    12. DistCp Across Clusters
      8m 30s
    13. DistCp Log Files
      2m 36s
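Decommissioning, balancing, and DistCp each have a short CLI footprint. A sketch under Hadoop 1.x conventions (hostnames and ports are illustrative):

```shell
# Nodes to decommission are listed in the file named by the
# dfs.hosts.exclude property in hdfs-site.xml; after editing that
# file, tell the NameNode to re-read it
hadoop dfsadmin -refreshNodes

# Rebalance block distribution across DataNodes; the threshold is the
# allowed % deviation from average utilization
hadoop balancer -threshold 10

# Copy data between clusters; DistCp runs as a MapReduce job
hadoop distcp hdfs://namenode1:9000/src hdfs://namenode2:9000/dest
```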
  8. 50m 29s
    1. Namenode Crashes
      8m 08s
    2. Corrupt and Missing Blocks
      2m 53s
    3. Starting the second datanode
      4m 03s
    4. Starting Namenode on old machine
      4m 45s
    5. Starting Namenode on old machine II
      3m 52s
    6. Working with updated metadata
      2m 44s
    7. Multiple Paths to run the cluster
      3m 35s
    8. NFS
      1m 00s
    9. NFS Settings
      1m 30s
    10. Start NFS
      4m 17s
    11. Start the cluster
      4m 12s
    12. Deleting Primary Path of Namenode
      2m 40s
    13. Getting Metadata from the Secondary to the Primary Location
      3m 13s
    14. Starting cluster Normally
      3m 37s
  9. 36m 20s
    1. The Need for MapReduce
      2m 13s
    2. Data Processing Framework in Hadoop
      3m 15s
    3. Understanding MapReduce in Hadoop
      2m 15s
    4. Stats for MapReduce
      3m 06s
    5. Starting the Inbuilt MapReduce Program
      6m 57s
    6. Running a MapReduce Program
      6m 35s
    7. Submitting Multiple MapReduce Jobs
      2m 14s
    8. Kill a Job and Find Status of a Job
      2m 04s
    9. Change job Priority
      3m 05s
    10. Output of a MapReduce Job
      4m 36s
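Lessons 5 to 10 revolve around the Hadoop 1.x `job` commands. A sketch; the jar name varies by Hadoop version and the job ID shown is illustrative:

```shell
# Run the bundled wordcount example shipped with Hadoop
hadoop jar hadoop-examples.jar wordcount /user/hduser/input /user/hduser/output

# List running jobs, kill one, and change another job's priority
hadoop job -list
hadoop job -kill job_201601010000_0001
hadoop job -set-priority job_201601010000_0002 HIGH

# Inspect the reducer output
hadoop fs -cat /user/hduser/output/part-r-00000
```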
  10. 21m 56s
    1. Schedulers in Hadoop
      1m 57s
    2. Goals of Fair Scheduler
      2m 05s
    3. Fair Scheduler Basic Concepts
      2m 35s
    4. Settings for Fair Scheduler
      6m 38s
    5. Submitting Jobs Using the Fair Scheduler
      2m 39s
    6. Capacity Scheduler
      6m 02s
  11. 35m 39s
    1. Apache Flume
      2m 48s
    2. Apache Sqoop
      2m 01s
    3. Install MySQL
      2m 53s
    4. Enter MySQL with a Password
      2m 07s
    5. Downloading Sqoop
      1m 48s
    6. Downloading MySQL connector
      1m 41s
    7. Extracting the tar files
      4m 19s
    8. Changes to bashrc
      1m 49s
    9. MySQL Connector in the Sqoop lib Directory
      1m 38s
    10. Connection Check Between Sqoop and MySQL
      3m 38s
    11. List Tables in Sqoop
      1m 53s
    12. Sqoop Import process
      3m 22s
    13. Importing data in HDFS
      5m 42s
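The connection check, table listing, and import lessons above come down to two Sqoop commands. A sketch; the hostname, database name, and credentials are illustrative:

```shell
# Verify the JDBC connection and list tables in a MySQL database
sqoop list-tables --connect jdbc:mysql://localhost/testdb \
  --username root --password secret

# Import one table into HDFS; -m 1 runs a single map task
sqoop import --connect jdbc:mysql://localhost/testdb \
  --username root --password secret \
  --table employees --target-dir /user/hduser/employees -m 1
```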
  12. 17m 01s
    1. Hadoop Cluster Planning
      1m 07s
    2. Start Planning the Cluster
      2m 44s
    3. Network For cluster planning
      3m 13s
    4. Memory and Disk Issue while planning the cluster
      3m 47s
    5. Hardware Requirements
      4m 41s
    6. CPU issue in planning cluster
      1m 29s
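A first-cut storage estimate for cluster planning multiplies the data volume by the replication factor and adds headroom for non-HDFS disk use. A minimal sketch with illustrative numbers (100 TB of data, replication 3, 25% of each disk reserved for OS, logs, and temp space):

```shell
# Illustrative sizing arithmetic; all figures are example inputs
DATA_TB=100
REPLICATION=3
USABLE_PCT=75   # percent of raw disk available to HDFS

RAW_TB=$(( DATA_TB * REPLICATION * 100 / USABLE_PCT ))
echo "Raw disk needed: ${RAW_TB} TB"
# → Raw disk needed: 400 TB
```

The same headroom reasoning applies to memory and CPU, which the later lessons in this module cover.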
  13. 18m 18s
    1. HDFS Quota
      3m 30s
    2. Commands For Setting Quotas
      4m 43s
    3. Reporting Command
      1m 59s
    4. Run Command for NameQuota
      2m 37s
    5. Set Space Quota
      5m 29s
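The name quota and space quota lessons above map onto a few `dfsadmin` commands. A sketch; the path and limits are illustrative, and note that a space quota counts raw bytes across all replicas:

```shell
# Limit the number of names (files plus directories) under a path
hadoop dfsadmin -setQuota 100 /user/hduser/project

# Limit the raw space consumed, counting every replica
hadoop dfsadmin -setSpaceQuota 10g /user/hduser/project

# Report usage; the -q flag adds quota columns to the count output
hadoop fs -count -q /user/hduser/project

# Remove the quotas
hadoop dfsadmin -clrQuota /user/hduser/project
hadoop dfsadmin -clrSpaceQuota /user/hduser/project
```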
  14. 34m 58s
    1. Introducing Hadoop 2.x
      4m 00s
    2. Unlinking Hadoop
      1m 47s
    3. Locating Jars and Config Files
      2m 53s
    4. Setting the config files
      5m 45s
    5. Format The Namenode
      3m 34s
    6. Starting the Daemons
      1m 52s
    7. BlockPool Id
      2m 52s
    8. Hadoop 2.x
      4m 02s
    9. NamespaceID in Hadoop 2 for DataNodes
      2m 17s
    10. Secure Copy
      2m 23s
    11. Hadoop 2.x Cluster Architecture
      4m 25s
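Hadoop 2.x splits the old `start-all.sh` into separate HDFS and YARN scripts, and each DataNode records its cluster and block-pool identity in a `VERSION` file. A sketch; the data directory path is illustrative:

```shell
# Format and start an Hadoop 2.x cluster
hdfs namenode -format
start-dfs.sh
start-yarn.sh

# Inspect the DataNode's stored IDs (clusterID, storageID); block-pool
# IDs live under a BP-* subdirectory of the same current/ directory
cat /home/hduser/dfs/data/current/VERSION
```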
  15. 23m 28s
    1. Hadoop in secure Mode
      1m 31s
    2. Entities in Security Process
      0m 57s
    3. Keys in security process
      1m 52s
    4. Other keys in Security Process
      1m 24s
    5. Attaining the TGT
      3m 25s
    6. Attaining service Ticket
      3m 27s
    7. Attaining the Authentication
      3m 07s
    8. Starting with Kerberos on CentOS
      2m 12s
    9. Creating the Database and Viewing the Generated Ticket
      5m 33s
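The TGT and service-ticket flow above is exercised with the standard MIT Kerberos tools. A sketch; the realm and principal names are illustrative:

```shell
# Create the KDC database (run once on the KDC host; -s stores a stash key)
kdb5_util create -s

# Add a principal to the database
kadmin.local -q "addprinc hduser@EXAMPLE.COM"

# Obtain a TGT for that principal, then list cached tickets: the TGT
# first, then service tickets as they are acquired
kinit hduser@EXAMPLE.COM
klist

# Destroy the ticket cache when done
kdestroy
```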
