Big data projects can be very expensive and can easily fail, so I suggest starting with a small, useful but non-critical project, ideally one involving unstructured data collection and batch processing. That way you have time to gain practice with the new technologies, and any downtime of the Apache Hadoop system is not critical.
At home I have the following system running on a small Raspberry Pi: admittedly, it is not fast 😉
At work I introduced Hadoop just a few months ago to collect web data and generate daily reports.
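To give an idea of what such a daily-report job looks like, here is a minimal sketch of the map/reduce logic in plain Python, in the style of a Hadoop Streaming job. The log format (date as the first whitespace-separated field) and the function names are my own assumptions for illustration, not the actual job I run.

```python
from collections import defaultdict

def map_line(line):
    """Mapper step: emit a (date, 1) pair for each web-log line.
    Assumes (hypothetically) that the date is the first field."""
    fields = line.split()
    if fields:
        return (fields[0], 1)
    return None  # skip blank lines

def reduce_pairs(pairs):
    """Reducer step: sum the counts per date, as Hadoop would
    after grouping the mapper output by key in the shuffle phase."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

if __name__ == "__main__":
    # Tiny sample of fake log lines standing in for real web data.
    log = [
        "2014-05-01 GET /index.html",
        "2014-05-01 GET /about.html",
        "2014-05-02 GET /index.html",
    ]
    pairs = [p for p in (map_line(l) for l in log) if p]
    print(reduce_pairs(pairs))  # daily hit counts, one entry per date
```

In a real Hadoop Streaming setup the mapper and reducer would be two separate scripts reading from stdin and writing key/value lines to stdout; the functions above just show the core logic.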