Impact of Fixed-Point Weight Quantization Bit-Width on MNIST Classification Accuracy

Authors

  • Ali Siddique, Department of Computer Science, University of Manchester, Manchester M13 9PL, United Kingdom

Keywords

ASIC, Edge AI, fixed-point, FPGA, MNIST, quantization, word length

Abstract

Background and Objective: This systematic review examines how fixed-point weight bit-width influences the classification accuracy of convolutional neural networks deployed in edge and embedded systems.

Materials and Methods: Studies evaluating uniform min–max quantization of weights across 1–32 bits were reviewed, focusing on work that isolates weight precision while keeping activations in float32 and maintaining a consistent network architecture, training procedure, and evaluation protocol. Research relevant to Field Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) implementations was prioritised.
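
The per-tensor scheme described above is standard; the following is a minimal NumPy sketch of uniform min–max "fake" weight quantization, assuming weights are quantized offline and dequantized back to float so inference can proceed with float32 activations. The function name and details are illustrative, not code from the reviewed studies.

    import numpy as np

    def quantize_minmax(w: np.ndarray, bits: int) -> np.ndarray:
        """Uniform min-max quantization: map weights onto 2**bits evenly
        spaced levels spanning [w.min(), w.max()], then dequantize back
        to float for simulated fixed-point inference."""
        w_min, w_max = float(w.min()), float(w.max())
        levels = (1 << bits) - 1              # number of steps, e.g. 31 for 5 bits
        if w_max == w_min:                    # degenerate layer: all weights equal
            return w.copy()
        scale = (w_max - w_min) / levels
        codes = np.round((w - w_min) / scale) # integer codes in [0, levels]
        return (codes * scale + w_min).astype(w.dtype)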

Results: Across the literature, weight precisions of 5–6 bits consistently provide a strong balance between accuracy and hardware efficiency for MNIST-level tasks. Accuracy deteriorates below 5 bits, while higher precisions offer negligible gains relative to increased resource use.

Conclusion: Fixed-point weight bit-width is a key parameter for efficient CNN deployment in constrained hardware environments. Simple word-length sweeps offer practical guidance for selecting precision and complement existing work on hardware-centric neural network design.
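
As a rough illustration of such a word-length sweep, the snippet below (reusing the hypothetical quantize_minmax helper from the earlier sketch) measures mean quantization error on random weights at each bit-width; an actual study would instead re-evaluate MNIST accuracy at each precision.

    import numpy as np

    # Illustrative sweep over 1-32 bits on random "weights"; a real sweep
    # would re-run MNIST inference per bit-width with float32 activations.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(128, 64)).astype(np.float32)
    for bits in range(1, 33):
        err = float(np.abs(quantize_minmax(w, bits) - w).mean())
        print(f"{bits:2d}-bit weights: mean |error| = {err:.6f}")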

Published

2025-11-25

Section

Systematic Review

How to Cite

[1] A. Siddique, “Impact of Fixed-Point Weight Quantization Bit-Width on MNIST Classification Accuracy,” Insights Comput. Sci., vol. 1, pp. 10–14, Nov. 2025, Accessed: Dec. 01, 2025. [Online]. Available: https://acadpub.com/ics/article/view/fixed-point-weight-quantization-mnist-accuracy