We study applications of a simple method for circumventing the ``full randomness assumption'' when building a hashing-based data structure for a set $S$ of keys. The general approach is to ``split'' $S$ into ``pieces'' $S_i$ using a splitting hash function. On each piece $S_i$, one applies a method or data structure for generating full randomness that uses more space than $|S_i|$. Under certain circumstances, this data structure can be ``shared'' among the constructions for the pieces $S_i$, which leads to a tighter overall space bound. The method was introduced in the context of cuckoo hashing and its variants, but it seems to have wider applicability. To demonstrate its power and some subtleties, we study three new applications, improving on previous constructions: (i) Space-efficient simulation of full randomness on $S$ (following work by Pagh and Pagh (2003/08) and Dietzfelbinger and Woelfel (2003)); (ii) Construction of highly independent functions in the style of Siegel (1989/2004); (iii) One-probe schemes as in work by Buhrman, Miltersen, Radhakrishnan, and Venkatesh (2000/02) and Pagh and Pagh (2002).
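
To make the structure of the approach concrete, the following Python sketch (not taken from the paper; the class name \texttt{SplitAndShare}, the table size, and the simple multiplicative inner hashes are illustrative placeholders) shows the three ingredients: a splitting hash function assigning keys to pieces, per-piece inner hash parameters of constant size, and a single table of random words, sized with respect to one piece rather than to all of $S$, that is shared by every piece. The real constructions require carefully chosen hash families and a probabilistic analysis that this sketch does not attempt.

\begin{verbatim}
import random

# Hypothetical illustration of the split-and-share idea; the inner hash
# functions and table size are placeholders, not the paper's constructions.

class SplitAndShare:
    """Splits a key set into pieces and reuses ONE random table for all pieces."""

    def __init__(self, num_pieces, piece_capacity, out_bits=32, seed=0):
        rng = random.Random(seed)
        self.num_pieces = num_pieces
        self.out_bits = out_bits
        # Shared structure: a table of random words whose size depends only
        # on the (small) piece capacity, not on |S|.  "Shared" means every
        # piece indexes into this same table.
        self.shared_table = [rng.getrandbits(out_bits)
                             for _ in range(4 * piece_capacity)]
        # Splitting hash function (simple multiplicative hashing): decides
        # which piece S_i a key belongs to.
        self.split_a = rng.getrandbits(64) | 1
        # Per-piece inner hash parameters: only O(1) words per piece.
        self.inner = [(rng.getrandbits(64) | 1, rng.getrandbits(64))
                      for _ in range(num_pieces)]

    def piece(self, x):
        # Index i of the piece S_i that key x is assigned to.
        return ((x * self.split_a) & (2**64 - 1)) % self.num_pieces

    def value(self, x):
        # Hash value of x: the inner hash of its piece selects an entry of
        # the shared random table.
        i = self.piece(x)
        a, b = self.inner[i]
        idx = ((a * x + b) & (2**64 - 1)) % len(self.shared_table)
        # Mix in the piece index so that two pieces hitting the same table
        # entry do not automatically produce identical values.
        return self.shared_table[idx] ^ (i & ((1 << self.out_bits) - 1))


if __name__ == "__main__":
    keys = random.sample(range(10**9), 1000)
    h = SplitAndShare(num_pieces=32, piece_capacity=64)
    print([h.value(k) for k in keys[:5]])
\end{verbatim}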