We present DRAIL, a declarative framework for specifying deep relational models. Our framework provides an easy-to-use interface for defining complex models consisting of many interdependent variables and for experimenting with different design choices, learning algorithms and neural architectures. We demonstrate the importance of correctly modeling the interactions between learning, representation and inference by applying DRAIL to two challenging relational learning problems that combine textual and social information. We then introduce a relational zero-shot learning task and show how the deep component of DRAIL can be leveraged in this setting.