Detecting a specific horizon in seismic images is a valuable tool for geologic interpretation. Because hand picking the locations of a horizon is time-consuming, automated computational methods have been developed over the past three decades. Until now, most networks have been trained on data created by cutting larger seismic images into many small patches, which limits a network's ability to learn from large-scale geologic structures. Moreover, currently available networks and training strategies require label patches with full and continuous horizon picks (annotations), which are also time-consuming to generate. We have developed a projected loss function that enables training on labels with just a few annotated pixels and is unaffected by the remaining unknown label pixels. We use this loss function to train convolutional networks with a multiresolution structure, including variants of the U-net. Our networks learn from a small number of large seismic images without creating patches. Training uses all of the seismic data, reserving none for validation; only the labels are split into training and testing sets. We validate the accuracy of the trained network using horizon picks that were never shown to the network. Contrary to other work on horizon tracking, we train the network to perform nonlinear regression rather than classification. Accordingly, we generate labels by convolving a Gaussian kernel with the known horizon locations, which communicates uncertainty in the labels; the network output is the probability of the horizon location. We examine the new method on two data sets, one for horizon extrapolation and one for interpolation. We find that the predictions of our method are accurate even in areas far from known horizon locations because our learning strategy exploits all data in large seismic images.
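As a minimal sketch of the two ideas described above, not the authors' implementation, the Gaussian-smoothed labels and the projected loss might look like the following in NumPy. The function names `gaussian_label` and `projected_loss`, the `sigma` parameter, and the mask-based formulation are illustrative assumptions: the loss is a least-squares misfit restricted (projected) onto the annotated pixels, so unknown label pixels contribute nothing to the gradient.

```python
import numpy as np

def gaussian_label(horizon_depths, n_depth, sigma=2.0):
    """Build a soft label image from sparse horizon picks (sketch).

    horizon_depths: length-n_traces array giving the horizon depth index
    per trace, with np.nan where no annotation exists.
    Returns (label, mask): label holds a Gaussian bump centered on each
    pick, communicating uncertainty in the pick location; mask is 1 on
    annotated traces and 0 elsewhere.
    """
    n_traces = len(horizon_depths)
    label = np.zeros((n_depth, n_traces))
    mask = np.zeros((n_depth, n_traces))
    z = np.arange(n_depth)
    for i, d in enumerate(horizon_depths):
        if np.isnan(d):
            continue  # unannotated trace: stays outside the loss
        label[:, i] = np.exp(-0.5 * ((z - d) / sigma) ** 2)
        mask[:, i] = 1.0
    return label, mask

def projected_loss(prediction, label, mask):
    """Least-squares loss evaluated only at annotated pixels."""
    residual = mask * (prediction - label)
    return 0.5 * np.sum(residual ** 2)
```

Because the mask zeros out every unannotated pixel, a network prediction can take arbitrary values away from the picks without being penalized, which is what allows training on labels with just a few annotated pixels.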