Date(s) - 09/08/2017
3:30 pm - 4:00 pm
Abstract: The recent popularity of mobile devices has increased the demand for mobile network services and applications that require minimal delay. 5G mobile networks are expected to provide much lower latency than present mobile networks. One conventional way of decreasing latency is caching content closer to end users; however, currently deployed methods are not effective enough. In this thesis, we propose a new caching strategy that predicts subsequent user requests and prefetches the necessary content, significantly decreasing end-to-end latency in 5G systems. Using mobile edge computing, we apply semantic inference to deduce what the end user is likely to request next and prefetch that content. We validate the proposed technique through emulation and compare it with the state of the art.
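The core idea of the abstract — learn which content tends to follow which at the edge, and prefetch the predicted next item into the edge cache before it is requested — can be sketched as follows. This is only an illustrative toy, not the thesis's actual method: the class name, the first-order (Markov-style) transition counts standing in for semantic inference, and the LRU eviction policy are all assumptions made for the example.

```python
from collections import defaultdict, Counter, OrderedDict

class PrefetchingEdgeCache:
    """Toy edge cache that learns request-to-request transitions and
    prefetches the most likely next content item (illustrative only)."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.cache = OrderedDict()               # LRU order: oldest first
        self.transitions = defaultdict(Counter)  # prev item -> Counter of successors
        self.prev = None
        self.hits = 0
        self.misses = 0

    def _insert(self, content):
        # Insert (or refresh) an item, evicting the least recently used one.
        if content in self.cache:
            self.cache.move_to_end(content)
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)
            self.cache[content] = True

    def request(self, content):
        # Serve from the edge cache if present, else fetch from the origin.
        if content in self.cache:
            self.hits += 1
        else:
            self.misses += 1
        self._insert(content)

        # Record the observed transition from the previous request.
        if self.prev is not None:
            self.transitions[self.prev][content] += 1
        self.prev = content

        # Prefetch the most frequent successor of this item, if one is known.
        successors = self.transitions[content]
        if successors:
            predicted = successors.most_common(1)[0][0]
            self._insert(predicted)              # cached before it is requested
```

On a cyclic trace such as A, B, C, A, B, C, ..., a plain LRU cache of capacity 2 never hits, while this prefetching variant serves every request from the cache once the transitions have been learned — a small-scale analogue of the latency reduction the thesis targets.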