The throughput and energy efficiency of compute-centric architectures for memory-intensive Deep Neural Network (DNN) applications are limited by memory-bound issues such as high data-access energy, long latencies, and limited bandwidth. Processing-in-Memory (PIM) is a promising approach to addressing these challenges and bridging the memory-computation gap. PIM places computational logic inside the memory to minimize data movement and exploit the massive internal data parallelism. There are currently two PIM trends: 1) using emerging non-volatile memories to perform highly parallel analog multiply-accumulate (MAC) operations, with the weights stored implicitly in the memory arrays, and 2) enhancing mature memory technologies with additional logic to compute MAC operations efficiently near the memory arrays. In this paper, we compare both trends from an architectural perspective. Our study focuses mainly on FeFET memories (an emerging memory candidate) and DRAM memories (a mature memory candidate). We highlight the major architectural constraints of these memory candidates that impact PIM designs and their overall performance. Finally, we assess which candidate is the more suitable choice for different computation patterns and DNN task types.
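For reference, the MAC kernel that both trends target is the inner product underlying every DNN layer; a minimal illustrative formulation (notation ours, not defined in the abstract) is

\[ y_j \;=\; \sum_{i=1}^{N} W_{ji}\, x_i , \]

where $W_{ji}$ are the stored weights and $x_i$ the input activations. In the first trend this sum is evaluated in the analog domain inside the memory arrays; in the second it is computed digitally by logic placed next to the arrays.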