
Machine Learning Bias in Application


Machine learning is powerful because it is taught to respond to data it has never seen before. Through a process of statistical analysis called “training”, these models can pick up on patterns in data sets and apply those motifs to completely new examples.

A famous example of machine learning in action is its application to the EMNIST database. EMNIST, an extension of the classic MNIST dataset, is a collection of human-drawn characters (including the digits 0-9) that has been widely used in training image classification programs.

Essentially, a computer can learn how to recognize and classify imperfect, handwritten numbers from this database. After such a model has been effectively trained, the program should be able to classify a fresh example it has never seen before.
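As a toy illustration of this train-then-classify idea, here is a minimal nearest-neighbor sketch in Python. The 3x3 "images" and labels are invented stand-ins rather than real EMNIST data, and real classifiers use far richer models; the point is only that a model fit to known examples can label a fresh, imperfect one:

```python
# Toy "training set": each label maps to a 3x3 binary image, flattened.
TRAINING = {
    1: (0, 1, 0,
        0, 1, 0,
        0, 1, 0),
    7: (1, 1, 1,
        0, 0, 1,
        0, 0, 1),
}

def distance(a, b):
    """Count the pixels where two binary images differ (Hamming distance)."""
    return sum(x != y for x, y in zip(a, b))

def classify(image):
    """Label a new image with its closest training example."""
    return min(TRAINING, key=lambda label: distance(TRAINING[label], image))

# A "fresh" example the model has never seen: a 1 with one pixel of noise.
noisy_one = (0, 1, 0,
             0, 1, 1,
             0, 1, 0)
print(classify(noisy_one))  # still recognized as a 1
```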

This is what makes AI and machine learning so powerful and likely to revolutionize technology as we know it. However, this same flexibility becomes potentially dangerous in at-scale applications.

Trained on Facebook posts and comments, Meta’s new AI chatbot “can convincingly mimic how humans speak on the internet,” according to CNN. However, as we all know by now, some people on Facebook can say some questionable things.

Not only did the chatbot recite many conspiracy theories propagated on the platform, but it also attacked the company that created it. Vice reports the bot even parroted “since deleting Facebook my life has been much better.”

In other examples, racial biases in datasets are amplified through the training process and skew the model.

Fundamentally, AI's greatest strength leads to inherent unpredictability in how it might respond to data. Whether from a marketing standpoint or an ethical one, it's necessary to spend real time understanding the potential biases of a dataset before using it to train any model.
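One concrete way to spend that time is to audit a dataset for group imbalance before any training happens. A minimal sketch, using invented records and a hypothetical `positive_rate` helper:

```python
# Invented labeled records; "group" is a sensitive attribute, "label" the target.
dataset = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

def positive_rate(records, group):
    """Share of records in a group that carry a positive label."""
    members = [r for r in records if r["group"] == group]
    return sum(r["label"] for r in members) / len(members)

rate_a = positive_rate(dataset, "A")
rate_b = positive_rate(dataset, "B")
# A wide gap between groups is exactly the kind of skew a trained
# model is likely to learn and then amplify.
print(rate_a, rate_b)
```

Checks like this are no substitute for a full fairness review, but they catch the most obvious skews before they are baked into a model.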



Tiny Machine Learning: Bringing AI to the Internet of Things


Machine learning (ML) and artificial intelligence (AI) are at the forefront of technological development. ML is an inherently flexible technology with the potential to revolutionize industries and the world. Many of its potential applications, however, must run on small, low-voltage circuits. Tiny Machine Learning, or tinyML, is the sub-field where machine learning and the embedded Internet of Things (IoT) collide, and this emerging technology has the potential to be revolutionary for multiple industries.

The tinyML Foundation describes its discipline as the following:

“a fast-growing field of machine learning technologies and applications including hardware, algorithms, and software capable of performing on-device sensor data analytics at extremely low power, typically in the milli-watt range and below, and hence enabling a variety of always-on use-cases and targeting battery-operated devices.”

In previous articles, we've extensively discussed the Internet of Things (IoT) - both how to keep IoT devices safe and how 5G and the IoT have the potential to create smart cities. IoT devices are some of the most obvious and exciting applications for tinyML. While these devices often aren't battery-powered, they are ideally designed to dissipate as little power as possible, decreasing a device's cost over time.

For reference, a typical incandescent household lightbulb dissipates 60 W of power, while tinyML circuits operate in the milliwatt range - meaning they run extremely cheaply. These devices often already involve some degree of ML, and with advanced tinyML technology, both their scope and efficiency could be improved. Alongside the development of battery technologies, many of these devices may eventually be wireless, battery-powered, and portable.
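The power comparison works out as a simple ratio. Both figures here are illustrative: a 60 W bulb versus a 1 mW device, the top of the "milliwatt range" from the quote above:

```python
# A 60 W bulb versus a 1 mW tinyML device, both expressed in milliwatts.
bulb_milliwatts = 60 * 1000
tinyml_milliwatts = 1
ratio = bulb_milliwatts // tinyml_milliwatts
print(ratio)  # 60000: the bulb draws 60,000 times the power
```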

While IoT devices seem like the obvious first step for tinyML, countless other applications are possible. The tinyML Foundation presents many potential applications on its website; for example, it shows how tinyML technology could make agriculture more water-efficient and sustainable.

The author, Ravi Rao, argues “for the implementation of the precision agriculture model, connecting all the devices to the network and passing data to the cloud is not always feasible … using microcontrollers interfaced with moisture sensors and water control valves, it is possible to implement a simple automatic irrigation system that turns the irrigation system on or off depending on a static value of soil moisture levels.”
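The threshold-based controller Rao describes can be sketched in a few lines. The 30% threshold and the sensor readings below are illustrative assumptions, not values from the article; on a real microcontroller this logic would drive an actual valve pin rather than print:

```python
MOISTURE_THRESHOLD = 30.0  # percent; below this, the soil counts as dry

def valve_command(moisture_pct, threshold=MOISTURE_THRESHOLD):
    """Decide the water valve state from a single on-device sensor reading."""
    return "open" if moisture_pct < threshold else "closed"

# Simulated readings from a soil moisture sensor - no cloud round-trip needed.
for reading in (22.5, 41.0):
    print(reading, valve_command(reading))
```

The decision happens entirely on the device, which is the whole tinyML point: no network connection or cloud service is required to keep the field watered.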

TinyML could not only revolutionize our consumer devices but also help promote sustainability, with potential for many more applications. The key benefits of tinyML are its efficiency and decentralized nature: while many current ML systems send data to the cloud for processing, tinyML systems can perform data analysis on-device. This makes for less data flow, more efficient applications, and a world of possibilities.



Artificial Intelligence and Advancing Synthetic Media (Deep Fakes)


Artificial Intelligence

Artificial intelligence (AI) is, by far, one of the most vital technologies currently being developed. As we have discussed in previous articles regarding Artificial Intelligence regulation and the Internet of Behaviors (IoB), AI has a lot of potential, both helpful and dangerous. Modern AI systems primarily rely on a process called deep learning. A deep learning model takes several numerical values as input and, after passing them through hidden layers of weighted computations, returns a set of numerical outputs. In a process called training, the model repeatedly adjusts those hidden weights to reduce the error in its outputs.
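That training loop can be sketched with a single weighted unit: the "hidden" weight is repeatedly nudged to shrink the output error. The data and learning rate below are invented for illustration; real deep models stack many such units into layers, but the update idea is the same:

```python
# Invented training data: inputs x with targets y = 2x.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
weight = 0.0          # the single "hidden" weight, initially a bad guess
learning_rate = 0.1

for _ in range(100):                          # "training": many passes over the data
    for x, target in samples:
        output = weight * x                   # forward pass through the weighted step
        error = output - target
        weight -= learning_rate * error * x   # nudge the weight to reduce error

print(round(weight, 3))  # converges to the underlying weight, 2.0
```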

Deep Fakes

Recently, there’s been a lot of discussion around an application of deep learning called synthetic media, better known as deep fakes. A highly complex deep learning system trains itself, using videos or pictures of a person’s face and mannerisms, to essentially photoshop one person’s face onto another’s body within a video. The same technique can also synthesize a voice and imitate a specific person. This technology has a lot of potential to be dangerous and is, at the very least, morally suspect. In the future, misinformation could easily be spread by faking a politician’s speech; even private citizens could be at risk.

The Risks

In a recent episode of 60 Minutes, journalist Bill Whitaker highlighted the risks of deep fakes, volunteering himself to show just how natural the end product can be. Whitaker also discussed the technology’s potential as a vector of misinformation; it could push our already polarized society over the edge. And these concerns are not just science fiction: a recent Forbes article reported that criminals used deep fake technology to impersonate a bank director’s voice and, in turn, steal over $35 million.

The Spread of Misinformation

Deep fakes are already posing serious threats to companies and governments. However, as technology constantly improves, its realism, and hence its threat, will continuously increase. It may be impossible to distinguish actual film or audio from deep fake media in a few years, which could cause misinformation to spread like wildfire on social media. As we move forward, it may be critical to regulate deep fake development, even if it may potentially infringe on individual freedoms.

The Future

Deep fakes are becoming more and more realistic, a trend that threatens the private sector, government systems, and even our society. Although deep learning is potentially one of the most valuable technologies to date, there is also cause for concern. Real good could come from deep learning in medical and informational applications; however, as with any new technology, in the wrong hands it could be used nefariously. In the coming years, it may be necessary to control deep fake development to preserve our society’s trust in virtual media as a whole. For now, the best precaution the public can take is simply knowing that deep fakes exist, questioning what they see, and not taking it at face value.


The Future of Marketing and Software – Internet of Behaviors (IoB)


If you are an avid follower of Quardev Thoughts, you will no doubt have heard of the Internet of Things (IoT). In essence, it’s a network created by connecting everyday electronics (refrigerators, thermostats, and the like) to the internet. The IoT has been expanding steadily over the past few years, and it will undoubtedly continue to grow as more convenient products fly off the shelves. Building on the IoT, an additional network and system is now developing: the Internet of Behaviors (IoB). Where the IoT is a network of physical products sharing data and being remotely controlled, the IoB will analyze behavior through the lens of human psychology, then use that analysis to steer marketing, control IoT gadgets, influence consumer actions, and optimize hardware performance.

The Exchange of Data

A recent Forbes article discussing the IoB came to the astute conclusion that “The IoT revolution was driven by hardware – connected devices that exchange data over the internet – but software that integrates this data will give rise to the IoB.” For example, your smart-thermostat could communicate with your smartwatch, adjusting the indoor temperature to optimize energy and keep you comfortable according to your vitals. This is just one example of how the IoB could be seriously impactful, but there’s one massive problem: data.
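The thermostat example amounts to software fusing readings from two devices into one behavioral decision. A minimal sketch, where the setpoints and heart-rate cutoffs are invented for illustration rather than drawn from any real product:

```python
def target_setpoint(room_temp_c, heart_rate_bpm):
    """Choose a thermostat setpoint (Celsius) from combined device data."""
    setpoint = 21.0              # comfortable default
    if heart_rate_bpm > 100:     # wearer appears active or overheated
        setpoint -= 2.0          # cool the room down
    elif heart_rate_bpm < 55:    # wearer appears at rest or asleep
        setpoint -= 1.0          # slightly cooler air aids sleep
    return setpoint

# Watch says the wearer is exerting themselves; the room should cool.
print(target_setpoint(room_temp_c=24.0, heart_rate_bpm=120))  # 19.0
```

The hard part, as the next section discusses, is not this decision logic but getting the two data streams into one place at all.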

Current Challenges

Inherently, IoB systems would have to synthesize datasets from many different sources to make suggestions to marketers and IoT networks. However, the teams that work on IoT systems are separated, both between companies and within siloed organizations. It’s incredibly difficult, right now, for a developer to bring all of this data together and use it in one place. As we move into the future, agile, intermixed teams will be vital to developing IoB networks. Currently, companies like Apple are best poised to employ such techniques.

Apple, for example, has a sizable immersive ecosystem of products that draw in consumers. They could use this to their benefit and develop robust IoB systems that employ all of their products working in unison. Other, smaller companies may need to work together to build such an ecosystem. For example, a smart-lighting company could work with a phone company to suggest adjusting your lights-out time if you must wake up early the following day.

Future Considerations

The key to building IoB systems will be collaboration and data. Corporations will need to restructure their teams to be less siloed and more agile, allowing for more holistic integration. IoB systems will likely be the future of both software and marketing: if data is shared more readily, marketing will undoubtedly become more accurate, drawing on an individual’s day-to-day behavior. However, as with any new technology, there are privacy and ethical concerns to consider. Securing all of this shared data remains a challenge today, so a more trustworthy system will clearly be needed before consumers can feel safe indulging in these futuristic luxuries.
