Algorithmic assessments of personal characteristics gleaned from social networks are regularly used to rate people in fields ranging from insurance premiums, to hiring decisions and employment prospects, to social security benefits. These algorithms comb through huge datasets (such as information uploaded by users to social networks) to “learn” correlations between certain characteristics and to generate “people-rankings”, which systematically rate individuals based on social, reputational, physical, mental and even behavioural features. Although such algorithms apply equally to people with and without disabilities, they are particularly pernicious for people with disabilities: the algorithms rank persons with disabilities lower (or as less desirable) than able-bodied individuals, resulting in discrimination against those with disabilities by the public and private sector organisations that rely on such algorithms. Legislative action is needed to provide people with disabilities with legal protection from such algorithmic discrimination, regardless of whether the discrimination is purposeful or inadvertent. Because such algorithms are used across a wide variety of industries, legislation requiring that similarly situated disabled and able-bodied persons receive the same algorithmic ranking can dramatically improve the quality of life and the opportunities available to people with disabilities.
Number of pages: 26
Journal: Journal of International and Comparative Law
State: Published - 2021
Bibliographical note: Publisher Copyright:
© 2021, Sweet and Maxwell-Thomson Reuters. All rights reserved.
- Artificial intelligence
- Big data
- Digital technology
- Social media