This is probably more of a philosophical debate than a General/ComputerScience one, but anyway, according to John Searle (1980, “Minds, Brains, and Programs”):
General/StrongAI: the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can literally be said to understand and to have other cognitive states.
General/WeakAI: computers merely simulate thought; their seeming understanding isn’t real understanding (only as-if understanding), their seeming calculation only as-if calculation, etc.; nevertheless, computer simulation is useful for studying the mind (just as it is for studying the weather and other things).
Searle gives an argument for why General/StrongAI can never be achieved by a computer, known as General/TheChineseRoom. Opinions are sharply divided, not only about whether the General/TheChineseRoom argument is cogent, but, among those who think it is, as to why it is, and, among those who think it is not, as to why not.
Source: http://www.utm.edu/research/iep/c/chineser.htm
Undergraduates and tenured professors can afford to ask “why?”
For all those in between, the relevant question is “how?”
Arguments in “strong AI” => strong whiff of BS
Arguments in “weak AI” => weak whiff of BS
Sorry for the “searle-y” response.