As AI models and data analytics workloads continue to scale, memory bandwidth and capacity have become critical bottlenecks in modern data centers. Compute Express Link (CXL) provides high-capacity, low-latency memory expansion that can be leveraged in several usage models. CXL memory expansion and pooling can significantly improve SQL workload performance and reduce cloud TCO, particularly for in-memory databases and analytics workloads that are bandwidth- and capacity-constrained. In addition, offloading the key-value (KV) cache, which is critical for efficient autoregressive generation, to CXL memory is emerging as an effective strategy to relieve memory bottlenecks and improve throughput in large language model (LLM) inference serving.
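As a concrete illustration (not taken from the text above), the sketch below shows one way a KV-cache buffer could be placed in CXL-attached memory, which typically appears to the OS as a CPU-less NUMA node. The node id `CXL_NODE`, the KV-cache dimensions, and the build command are assumptions for the example; the real node id must be read from the system topology (e.g. `numactl --hardware`).

```c
/* Minimal sketch: binding an LLM KV-cache buffer to a CXL-backed NUMA node
 * with libnuma. Build (assumption): gcc kv_cxl.c -o kv_cxl -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define CXL_NODE 2   /* hypothetical NUMA node backed by the CXL expander */

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available on this system\n");
        return 1;
    }

    /* Illustrative KV-cache sizing: layers x 2 (K and V) x tokens x heads x
     * head_dim in fp16; dimensions are not tied to any specific model. */
    size_t layers = 32, tokens = 4096, heads = 32, head_dim = 128;
    size_t kv_bytes = layers * 2 * tokens * heads * head_dim * 2 /* fp16 */;

    /* Keep hot model weights in local DRAM and spill the colder,
     * capacity-hungry KV cache to the CXL node. */
    void *kv_cache = numa_alloc_onnode(kv_bytes, CXL_NODE);
    if (!kv_cache) {
        perror("numa_alloc_onnode");
        return 1;
    }
    memset(kv_cache, 0, kv_bytes);   /* touch pages so they fault in on the CXL node */

    printf("KV cache of %zu MiB placed on NUMA node %d\n",
           kv_bytes >> 20, CXL_NODE);

    numa_free(kv_cache, kv_bytes);
    return 0;
}
```

In a real serving stack, an inference engine or a kernel memory-tiering policy would typically make this placement decision per tensor or per page rather than binding the whole cache statically as shown here.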