“Many of the technologists involved in data aggregation see a benefit to civil society,” wrote Quentin Hardy, in a New York Times column on the disappearance of urban anonymity when all data is tracked. Ethicists, researchers, and corporate compliance officers, by way of contrast, may see risks to privacy and civil rights from “big data.”

Over the years, I’ve encountered, channeled, or challenged different strains of technology-fueled optimism and pessimism about the impact of new technologies on society. Last week, I read a perspective that mixed some of both, published in MIT’s Technology Review, in which Stanford fellow Vivek Wadhwa argued that laws and ethics can’t keep pace with technology.

He’s on to something important: the nature of rapid technological progress and relatively slow legislative process and regulatory rulemaking means there will almost always be a gap between technology and the law, unless parliaments preemptively move to limit certain uses or developments. The development of some technologies may move underground, offshore, or overseas, beyond such restrictions.

Wadhwa shared three examples of technological change that pose challenging legal and ethical decisions for legislatures, courts, and society, from genomic testing to surveillance and the use of smartphones; he also looked ahead to how self-driving cars, drones, and robots will pose new legal and ethical issues. He’s mining a rich vein of material: the introduction of new technologies into society will put individuals, organizations, and governments in situations where they will need to make choices that are novel to them.

Wadhwa made two assertions in his column that complicate his thesis a bit, with respect to the laws governing the use of genetic information and social media. An earlier version of the article stated that there was “no law” governing the use of genetic information. That’s not quite accurate, as the updated column reflects: the Supreme Court has ruled (PDF) that law enforcement officers can take DNA evidence from people who have been arrested for, but not convicted of, serious crimes. Similarly, doctors may take tissue from patients in hospitals and subsequently use it, or the genetic information in it, for research without the patient’s informed consent. The issues around patenting genetic information are even more complex.

In his column, Wadhwa also stated that employers “can use social media to filter out job applicants based on their beliefs, looks, and habits,” even though there’s clear guidance from the U.S. Equal Employment Opportunity Commission that social media may not “be used to make employment decisions on prohibited bases.”

The overall point Wadhwa made, however, is sound: despite regulatory guidance to the contrary, a growing number of employers are using search engines and social media to look up applicants as part of the hiring process and may be using what they find to discriminate. After winnowing down an applicant pool to the people who meet baseline qualifications, how many recruiters or hiring managers in 2014 will not Google them or search for their Facebook or LinkedIn profiles? And, after doing so, how many will not be influenced by what they find? The answer seems uncomfortably clear.

Where I found myself disagreeing with Wadhwa most, however, is not on the details but on the question of whether exponential technological change produces societal shifts in ethics. I define ethics generally as the moral principles that govern the behavior of a person or group in a given context. While it’s true that what we see as moral exists within a given culture or time, does that baseline shift because of technology? Did societal progress with respect to the abolition of slavery, suffrage, or civil rights in the US occur because of technology, or merely in the context of it?

Each generation born into the 20th century has had new technologies and tools to learn to use ethically, from telephones to smartphones to drones and genetic tests. With each new technology, society has had to grapple with unethical uses and find new norms. Each ethical decision is made within the context of the society around it, with a given technology acting as a tool or enabler of an action.

Wadhwa is doing us all a public service by highlighting these challenges. Businesses are grappling with the ethical issues posed by massive data aggregation and data analysis, looking for the right balance between innovation and increased risk. Governments, schools, research labs, and anyone else using these new tools for data storage, processing, and analysis will as well. As more members of the media practice data-driven journalism, they also have to make ethical decisions about the use of public records, from identifying at-risk individuals within datasets and publicizing arrests to weighing potential harms to national security or personal privacy.

One relevant question, then, is: does technological progress change an absolute condition? For instance, does killing become ethical if a given technology, like drones, makes it easier? Even if our laws, or our interpretations of them, lag behind the use of “flying death robots” for the extrajudicial killing of an American citizen without due process, legal ethicists may still reasonably question the morality of that action by the state, or government opacity around its use.

Or, to take another uncomfortable example raised by Wadhwa, does the use of less expensive ultrasound devices to determine the sex of fetuses change the underlying ethical questions that parents face regarding whether to carry a baby to term?

I suspect that Wadhwa and I could agree that stealing, murder, assault, slander, or fraud are all unethical. Do new technologies really change our present social compacts or the ethics involved? Are there ethical choices that will continue to be clearly flawed, regardless of tech change?

These are truly difficult questions, with no easy answers. I think ethics speak to a higher order of standards for decision-making that is more resistant to change. What that means in practice is that we’ll need to learn how to apply existing ethics and moral values to novel situations that technology presents.

For instance, is the use of an individual’s genomic information for research or profit without her consent or that of her family ethical? A recent bioethics controversy regarding the use of Henrietta Lacks’ DNA for decades of research resulted in a landmark decision by the National Institutes of Health to give her family some control over how her genome is used.

It’s worth noting that the Lacks case involved the use of data without consent. A focus on usage will be critical if an acceleration in data collection and creation around the world continues. So, too, will asking hard questions about big data, as danah boyd and Kate Crawford did in 2011. Almost two years ago, my colleague Alistair Croll wrote that “big data is our generation’s civil rights issue, and we don’t even know it.”

Today, there is finally an international conversation about the related issues of power and influence around the collection and use of data. New scholarship on big data ethics is emerging from academia. Civil rights groups are warning against abuse of big data, and the White House is conducting a review of big data and privacy.

On the latter count, I participated this spring in the second big data and privacy workshop held pursuant to that review, delivering a short talk on data journalism, networked transparency, algorithmic transparency, and the public interest. The forum, on the social, cultural, and ethical dimensions of “big data,” was convened by the Data & Society Research Institute with the White House Office of Science and Technology Policy and hosted at New York University’s Information Law Institute.

The selection of talks should offer considerable context for these issues, as will the forthcoming review from the White House this month, and another, separate assessment from the President’s Council of Advisors on Science and Technology later this year.