A recent paper published in the journal Engineering presents a novel approach to artificial intelligence by modeling it after how human memory functions. The research aims to overcome key limitations of current large-scale models like ChatGPT, setting the stage for more efficient and cognitively intelligent AI systems.

While large models have demonstrated impressive performance across a range of applications, they also exhibit significant shortcomings. These include high data and computational demands, susceptibility to catastrophic forgetting, and limited logical reasoning capabilities. According to the study, these issues arise from the fundamental design of artificial neural networks, their training processes, and their reliance on purely data-driven reasoning.

To address these challenges, the researchers propose the concept of “machine memory”: a multi-layered, distributed network storage structure that encodes external information in a machine-readable, computable format. The structure supports dynamic updates, spatiotemporal association, and fuzzy hash-based access. Building on machine memory, they introduce the M2I framework, composed of representation, learning, and reasoning modules that form two interacting loops.
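The paper's concrete data structures are not described in this summary, so the following is only a minimal illustrative sketch in Python of the general idea: a memory store that accepts dynamic writes, retrieves items through a coarse ("fuzzy") hashed lookup, and answers simple spatiotemporal association queries. All names here (`MachineMemory`, `write`, `read`, `associate`) are hypothetical and are not taken from the paper.

```python
import time
from dataclasses import dataclass


@dataclass
class MemoryItem:
    content: str      # encoded piece of external information
    location: str     # spatial tag
    timestamp: float  # temporal tag


class MachineMemory:
    """Toy multi-bucket memory with fuzzy hashed access, dynamic updates,
    and a simple spatiotemporal association query (illustrative only)."""

    def __init__(self, num_buckets: int = 16):
        self.num_buckets = num_buckets
        self.buckets: list[list[MemoryItem]] = [[] for _ in range(num_buckets)]

    def _fuzzy_key(self, content: str) -> int:
        # Coarse signature: bucket by the first token, lowercased, so
        # near-duplicate phrasings tend to land in the same bucket.
        tokens = content.lower().split()
        return hash(tokens[0] if tokens else "") % self.num_buckets

    def write(self, content: str, location: str) -> None:
        # Dynamic update: new items can be appended at any time.
        self.buckets[self._fuzzy_key(content)].append(
            MemoryItem(content, location, time.time())
        )

    def read(self, query: str) -> list[MemoryItem]:
        # Fuzzy access: return every item in the query's bucket.
        return list(self.buckets[self._fuzzy_key(query)])

    def associate(self, location: str, window_s: float = 60.0) -> list[MemoryItem]:
        # Spatiotemporal association: items tagged with `location`
        # that were stored within the last `window_s` seconds.
        cutoff = time.time() - window_s
        return [item for bucket in self.buckets for item in bucket
                if item.location == location and item.timestamp >= cutoff]


if __name__ == "__main__":
    mem = MachineMemory()
    mem.write("coffee cup on the desk", location="office")
    mem.write("coffee machine refilled", location="kitchen")
    print([item.content for item in mem.read("coffee spotted earlier")])
    print([item.content for item in mem.associate("office")])
```

In this sketch the hash is deliberately lossy so that similar queries collide with related memories, standing in for the fuzzy access the paper describes; a real system would pair such storage with the representation, learning, and reasoning modules of the M2I framework rather than a simple bucket lookup.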
