The Impact of Generative AI on Personalized Content Marketing in E-Commerce
DOI: https://doi.org/10.63544/ijss.v4i1.288

Keywords: Generative AI, Personalization, E-Commerce, Content Marketing, Consumer Trust, LLM, Marketing Ethics, Authenticity

Abstract
This research investigates the effectiveness of Generative Artificial Intelligence (GenAI) in creating hyper-personalized marketing content compared to traditional rule-based personalization. While GenAI offers unprecedented scale, empirical evidence regarding its impact on consumer behavior and ethical perception remains limited.
Using a sequential mixed-methods design, a randomized A/B experiment (N = 1,140) compared GenAI-driven personalization (GPT-4 and DALL-E 3) against traditional rule-based methods in a simulated retail environment. Post-experiment surveys (n = 480) assessed perceived authenticity, privacy concerns, and AI detection accuracy. Qualitative insights were gathered through semi-structured interviews with six marketing managers from three e-commerce brands.
GenAI personalization significantly outperformed rule-based methods, increasing click-through rates by 35.2% (24.6% vs. 18.2%; p = 0.005) and conversion rates by 38.1% (11.6% vs. 8.4%; p = 0.032). Time-on-site increased by 16.3 seconds (Cohen's d = 0.49). However, disclosing AI involvement negatively affected brand authenticity (d = -0.64), trust (d = -0.65), and purchase intention (d = -0.55). While 77.5% of consumers desired an opt-out setting for AI personalization, they were unable to reliably distinguish AI-generated from human-written content (accuracy = 53.3%). Qualitative data highlighted "human-in-the-loop" processes as essential to mitigating hallucination risks.
The study contributes the first direct empirical benchmark of GenAI against rule-based personalization in e-commerce. It identifies a "detection-disclosure paradox," in which consumers cannot recognize AI-generated content yet become suspicious once its use is disclosed, and proposes a Transparent Hybrid Model to balance operational efficiency with ethical protection and brand trust.
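For context, the reported effect sizes for click-through and conversion are relative lifts over the rule-based baseline. A minimal sketch, using only the group-level rates stated in the abstract, reproduces that arithmetic:

```python
def relative_lift(treatment_rate: float, baseline_rate: float) -> float:
    """Relative lift of the treatment over the baseline, as a fraction."""
    return (treatment_rate - baseline_rate) / baseline_rate

# Click-through rate: 24.6% (GenAI) vs. 18.2% (rule-based)
ctr_lift = relative_lift(0.246, 0.182)  # ~0.352, i.e. the reported 35.2%

# Conversion rate: 11.6% (GenAI) vs. 8.4% (rule-based)
cvr_lift = relative_lift(0.116, 0.084)  # ~0.381, i.e. the reported 38.1%

print(f"CTR lift: {ctr_lift:.1%}, conversion lift: {cvr_lift:.1%}")
```

Note that the significance tests and Cohen's d values reported above cannot be recomputed from the abstract alone (per-group sample sizes and variances are not given), so the sketch is limited to the lift percentages.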
References
Aguirre, E., Mahr, D., Grewal, D., de Ruyter, K., & Wetzels, M. (2015). Unraveling the personalization paradox: The effect of information collection and trust-building strategies on online advertisement effectiveness. Journal of Retailing, 91(1), 34–49. https://doi.org/10.1016/j.jretai.2014.09.005
Ansari, A., & Mela, C. F. (2003). E-customization. Journal of Marketing Research, 40(2), 131–145. https://doi.org/10.1509/jmkr.40.2.131.19224
Asif, M., & Sandhu, M. S. (2023). Social media marketing revolution in Pakistan: A study of its adoption and impact on business performance. Journal of Business Insight and Innovation, 2(2), 67–77.
Asif, M., & Shaheen, A. (2022). Creating a high-performance workplace by the determination of importance of job satisfaction, employee engagement, and leadership. Journal of Business Insight and Innovation, 1(2), 9–15.
Awad, N. F., & Krishnan, M. S. (2006). The personalization privacy paradox: An empirical evaluation of information transparency and the willingness to be profiled online for personalization. MIS Quarterly, 30(1), 13–28. https://doi.org/10.2307/25148715
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922
Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21–34. https://doi.org/10.1016/j.cognition.2018.08.003
Bond-Taylor, S., Leach, A., Long, Y., & Willcocks, C. G. (2022). Deep generative modelling: A comparative review of VAEs, GANs, normalizing flows, energy-based and autoregressive models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11), 7327–7347. https://doi.org/10.1109/TPAMI.2021.3116668
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Burke, R. (2002). Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction, 12(4), 331–370. https://doi.org/10.1023/A:1021240730564
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
Chandra, S., Verma, S., Lim, W. M., Kumar, S., & Donthu, N. (2022). Personalization in personalized marketing: Trends and ways forward. Psychology & Marketing, 39(8), 1529–1562. https://doi.org/10.1002/mar.21670
Creswell, J. W., & Plano Clark, V. L. (2017). Designing and conducting mixed methods research (3rd ed.). SAGE Publications.
Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48(1), 24–42. https://doi.org/10.1007/s11747-019-00696-0
Davenport, T. H., Guha, A., & Grewal, D. (2023). Generative AI in marketing: Early adopters and emerging use cases. Harvard Business Review, 101(5), 56–65.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
Elder, R. S., & Krishna, A. (2012). The “visual depiction effect” in advertising: Facilitating embodied mental simulation through product orientation. Journal of Consumer Research, 38(6), 988–1003. https://doi.org/10.1086/661531
European Commission. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149
Ferrara, E. (2023). Should ChatGPT be biased? Challenges and risks of generative artificial intelligence for social science. arXiv. https://doi.org/10.48550/arXiv.2304.03738
Granulo, A., Fuchs, C., & Puntoni, S. (2021). Preference for human (vs. robotic) labor is stronger in symbolic consumption contexts. Journal of Consumer Psychology, 31(1), 72–80. https://doi.org/10.1002/jcpy.1137
Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). Guilford Press.
Huang, M. H., & Rust, R. T. (2021). A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science, 49(1), 30–50. https://doi.org/10.1007/s11747-020-00749-9
Ippolito, D., Duckworth, D., Callison-Burch, C., & Eck, D. (2020). Automatic detection of generated text is easiest when humans are fooled. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 1808–1822). https://doi.org/10.18653/v1/2020.acl-main.164
Jakesch, M., French, M., Ma, X., Hancock, J. T., & Naaman, M. (2019). AI-mediated communication: How the perception that profile text was written by AI affects trustworthiness. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Paper No. 396, pp. 1–13). https://doi.org/10.1145/3290605.3300626
Jarek, K., & Mazurek, G. (2023). Generative AI in marketing: A new era of content creation. Procedia Computer Science, 225, 1234–1243. https://doi.org/10.1016/j.procs.2023.10.109
Ji, Z., et al. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), Article 248. https://doi.org/10.1145/3571730
Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Longoni, C., & Cian, L. (2022). Artificial intelligence in utilitarian vs. hedonic contexts: The “AI–content gap.” Journal of Marketing, 86(1), 91–108. https://doi.org/10.1177/00222429211030287
Lops, P., de Gemmis, M., & Semeraro, G. (2011). Content-based recommender systems: State of the art and trends. In F. Ricci et al. (Eds.), Recommender systems handbook (pp. 73–105). Springer. https://doi.org/10.1007/978-0-387-85820-3_3
Malhotra, N. K., Kim, S. S., & Agarwal, J. (2004). Internet users’ information privacy concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research, 15(4), 336–355. https://doi.org/10.1287/isre.1040.0032
Mei, Q., Xie, Y., Yuan, W., & Jackson, M. O. (2024). A Turing test of whether AI chatbots are behaviorally indistinguishable from humans. Proceedings of the National Academy of Sciences, 121(12), e2313924121. https://doi.org/10.1073/pnas.2313924121
Mohiuddin, D., & Farhan, D. N. (2025). Artificial intelligence in marketing: Ethical challenges and solutions for consumers and society. Journal of Business Insight and Innovation, 4(1), 73–87.
Mohiuddin, D. (2024a). Algorithmic hyper-personalization: The double-edged sword of predictive personalization – an empirical investigation. Journal of Engineering and Computational Intelligence Review, 2(2), 82–94.
Mohiuddin, D. (2024b). Consumer perceptions and trust in AI-generated advertising: An experimental study in the Pakistani context. Apex Journal of Social Sciences, 3(1), 53–68.
Montgomery, A. L., & Smith, M. D. (2009). Prospects for personalization on the internet. Journal of Interactive Marketing, 23(2), 130–137. https://doi.org/10.1016/j.intmar.2009.02.001
Morhart, F., Malär, L., Guèvremont, A., Girardin, F., & Grohmann, B. (2015). Brand authenticity: An integrative framework and measurement scale. Journal of Consumer Psychology, 25(2), 200–218. https://doi.org/10.1016/j.jcps.2014.11.006
Mumtaz, A., Munir, N., Mumtaz, R., Farooq, M., & Asif, M. (2023). Impact of psychological & economic factors on investment decision-making in Pakistan stock exchange. Journal of Positive School Psychology, 7(4), 130–135.
Peres, R., Schreier, M., Schweidel, D., & Sorescu, A. (2023). The impact of generative AI on marketing: A framework for research and practice. International Journal of Research in Marketing, 40(4), 789–801. https://doi.org/10.1016/j.ijresmar.2023.03.001
R Core Team. (2023). R: A language and environment for statistical computing (Version 4.3) [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org/
Ricci, F., Rokach, L., & Shapira, B. (2015). Recommender systems: Introduction and challenges. In F. Ricci et al. (Eds.), Recommender systems handbook (2nd ed., pp. 1–34). Springer. https://doi.org/10.1007/978-1-4899-7637-6_1
Smith, B., & Linden, G. (2017). Two decades of recommender systems at Amazon.com. IEEE Internet Computing, 21(3), 12–18. https://doi.org/10.1109/MIC.2017.72
Sundar, S. S., & Marathe, S. S. (2010). Personalization versus customization: The importance of agency, privacy, and power usage. Human Communication Research, 36(3), 298–322. https://doi.org/10.1111/j.1468-2958.2010.01377.x
Tam, K. Y., & Ho, S. Y. (2006). Understanding the impact of web personalization on user information processing and decision outcomes. MIS Quarterly, 30(4), 865–890. https://doi.org/10.2307/25148757
Weidinger, L., et al. (2022). Taxonomy of risks posed by language models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22) (pp. 214–229). https://doi.org/10.1145/3531146.3533088
Yoo, B., Donthu, N., & Lee, S. (2000). An examination of selected marketing mix elements and brand equity. Journal of the Academy of Marketing Science, 28(2), 195–211. https://doi.org/10.1177/0092070300282002
License
Copyright (c) 2025 Danyal Mohiuddin, Muhammad Hamza Tariq, Areej Tahir

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, which permits others to share the work with an acknowledgement of the authorship and the work's original publication in this journal, while the authors retain copyright and grant the journal the right of first publication.