The Chinese government is tightening its regulation of artificial intelligence (AI). The new rules, issued by the National Development and Reform Commission (NDRC) in December 2022, aim to ensure that AI is developed safely and ethically.
The NDRC regulations cover a wide range of issues, including the collection and use of personal data, the development of autonomous vehicles, and the use of AI in the financial sector. The regulations also require AI developers to conduct risk assessments and to put in place safeguards to protect users’ privacy and security.
The tightening of AI regulations in China reflects the government’s growing concern about the technology’s potential risks. In recent years, there have been a number of high-profile incidents involving the misuse of AI, such as the use of facial recognition technology to track Uyghur Muslims in the Xinjiang region.
The NDRC regulations are intended to address these concerns and to ensure that AI is developed responsibly. They are also likely to have a significant impact on the global AI industry, as many of the world’s leading AI companies are based in China.
The following are some of the key provisions of the NDRC regulations on AI:
- AI developers must obtain government approval before collecting or using personal data.
- AI developers must conduct risk assessments and put in place safeguards to protect users’ privacy and security.
- The development of autonomous vehicles must be subject to strict safety standards.
- The use of AI in the financial sector must be subject to regulatory oversight.
Taken together, the NDRC regulations mark a significant development in the global AI landscape. They signal how seriously Beijing now treats the risks of AI, and their effects are likely to extend well beyond China’s borders.