Before a DMA read, the destination buffer's cache lines should be
invalidated so that, once the DMA transfer completes, the CPU fetches
the newly written data from memory instead of stale cache contents.
The code instead calls flush_dcache_range, which is incorrect for this
purpose. Change flush_dcache_range to invalidate_dcache_range.
Signed-off-by: Ashok Reddy Soma <ashok.reddy.soma@amd.com>
Signed-off-by: Venkatesh Yadav Abbarapu <venkatesh.abbarapu@amd.com>
Link: https://lore.kernel.org/r/20230915031759.28889-2-venkatesh.abbarapu@amd.com
Signed-off-by: Michal Simek <michal.simek@amd.com>
 	writel(GQSPI_DMA_DST_I_STS_MASK, &dma_regs->dmaier);
 	addr = (unsigned long)buf;
 	size = roundup(priv->len, GQSPI_DMA_ALIGN);
-	flush_dcache_range(addr, addr + size);
+	invalidate_dcache_range(addr, addr + size);
 	while (priv->len) {
 		zynqmp_qspi_calc_exp(priv, &gen_fifo_cmd);
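
For context, the usual cache-maintenance pattern around DMA in U-Boot
drivers looks roughly like the sketch below. It is a minimal
illustration and not part of this patch: dma_read_into(),
dma_write_from(), dma_start_read(), dma_start_write() and
dma_wait_done() are hypothetical placeholders for driver-specific code,
while invalidate_dcache_range(), flush_dcache_range(), roundup() and
ARCH_DMA_MINALIGN are existing U-Boot interfaces (header locations may
vary between U-Boot versions).

/*
 * Minimal sketch (not part of this patch) of cache maintenance around DMA.
 * The dma_start_read()/dma_start_write()/dma_wait_done() helpers are
 * hypothetical placeholders for controller-specific register programming.
 */
#include <cpu_func.h>		/* flush_dcache_range(), invalidate_dcache_range() */
#include <asm/cache.h>		/* ARCH_DMA_MINALIGN */
#include <linux/kernel.h>	/* roundup() */

void dma_start_read(void *buf, unsigned int len);		/* hypothetical */
void dma_start_write(const void *buf, unsigned int len);	/* hypothetical */
void dma_wait_done(void);					/* hypothetical */

/* DMA engine writes into buf: invalidate so the CPU re-reads RAM afterwards. */
static void dma_read_into(void *buf, unsigned int len)
{
	unsigned long addr = (unsigned long)buf;
	unsigned long size = roundup(len, ARCH_DMA_MINALIGN);

	invalidate_dcache_range(addr, addr + size);
	dma_start_read(buf, len);
	dma_wait_done();
}

/* DMA engine reads from buf: flush so dirty cache lines reach RAM first. */
static void dma_write_from(const void *buf, unsigned int len)
{
	unsigned long addr = (unsigned long)buf;
	unsigned long size = roundup(len, ARCH_DMA_MINALIGN);

	flush_dcache_range(addr, addr + size);
	dma_start_write(buf, len);
	dma_wait_done();
}

The read path mirrors what the patch changes in zynqmp_gqspi.c: the
buffer is about to be overwritten by the DMA engine, so the correct
operation is to invalidate (discard) the cached copy, not to flush it
back to memory.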