Map/Reduce is a programming model originally introduced by Google and implemented in open-source form in Hadoop for processing large data sets. Map/Reduce is typically used to distribute computation efficiently across clusters of computers. The map step takes a set of data and converts it into another set of data, where individual elements are broken down into key/value pairs, known as tuples. The reduce step then takes the output from a map as input and combines tuples that share the same key into a smaller set of tuples.
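The two phases can be sketched with the classic word-count example. This is a minimal, single-process illustration of the pattern, not code from Hadoop itself; the function names (`map_phase`, `reduce_phase`) are made up for clarity.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) tuple for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce: combine tuples sharing the same key into a single count."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
result = reduce_phase(map_phase(docs))
print(result["the"])  # 3
```

In a real cluster, the framework runs many map tasks in parallel over partitions of the input and routes tuples with the same key to the same reduce task, but the data flow is the same as in this sketch.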