From Symbol to Meaning: Ontological and Philosophical Reflections on Large Language Models in Information Systems Engineering

José Palazzo Moreira de Oliveira [0000-0002-9166-8801]
Federal University of RGS, Instituto de Informática, Porto Alegre, Brazil
palazzo@inf.ufrgs.br

Abstract. The advent of Large Language Models (LLMs) represents a turning point in the theoretical foundations of Information Systems Engineering. Beyond their technical significance, LLMs challenge the ontological, epistemological, and semiotic assumptions that have long structured our understanding of information, representation, and knowledge. This article proposes an integrative reflection on how LLMs reconfigure the relationships among language, meaning, and system design, suggesting that their emergence demands a re-examination of the conceptual foundations of contemporary information systems. Drawing on philosophical traditions from Peirce to Heidegger and Floridi, we investigate how the logic of generative models both extends and destabilises classical notions of ontology and signification. The discussion emphasises the necessity of grounding LLM-based systems in transparent, ethically coherent frameworks that respect the integrity of human-centred knowledge processes. Ultimately, the paper argues that LLMs should be understood not merely as tools for automation but as epistemic agents that reshape the philosophical and semiotic foundations of information systems engineering.

Keywords: Large Language Models; Ontology; Semiotics; Epistemology; Information Systems Engineering; Philosophy of Information; Ethics of AI.
1 Introduction

The title From Symbol to Meaning: Ontological and Philosophical Reflections on Large Language Models in Information Systems Engineering emphasises the transition from symbol to meaning by drawing on the perspective of Semiotics, in which a symbol is understood as a sign that represents something within a system of interpretation rather than possessing inherent meaning. In this sense, large language models operate primarily on symbolic structures, tokens, words, and syntactic patterns, while the notion of meaning emerges only when these symbols are interpreted within a conceptual and ontological context. The title, therefore, signals a reflection on how computational systems manipulate symbolic representations and how these representations relate to semantic understanding and domain knowledge within the foundations of Information Systems Engineering.

The field of Information Systems Engineering (ISE) has undergone profound transformations since its inception, shaped by both academic inquiry and practical experience. Originating from the need to address complex organisational problems, ISE emerged as a response to the challenges of managing resources, processes, and communication within increasingly industrialised societies. Early urban centres already required coordinated systems for logistics, storage, and security, laying the conceptual foundations for modern information systems.

The ethnographic dimension of ISE, crucial for understanding user contexts, has its intellectual roots in the work of Bronisław Malinowski (Malinowski 1929), who introduced the concept of participant observation in the early twentieth century. His emphasis on immersive fieldwork established the methodological principle that a comprehensive understanding of a system requires deep engagement with the environment in which it operates.
This insight remains central to ISE practice: effective system design depends on an interpretative understanding of human behaviour and organisational culture.

Within this broader perspective, the ethnographic dimension of Information Systems Engineering illustrates how the field can benefit from intellectual exchanges across multiple areas of knowledge, thereby preventing the formation of isolated disciplinary silos. The ethnographic approach, rooted in Bronisław Malinowski's concept of participant observation, demonstrates that understanding information systems requires not only technical or formal modelling perspectives but also interpretative insights into human practices and organisational contexts. By incorporating perspectives from Ethnography and related social sciences, researchers from different domains can contribute complementary viewpoints to the study and design of systems. Such interdisciplinary engagement fosters a more integrated understanding of technological artefacts, human behaviour, and institutional environments, encouraging collaboration rather than fragmentation within the broader research landscape.

In parallel, the systemic dimension of this interpretative approach is grounded in the work of Ludwig von Bertalanffy (Von Bertalanffy 1972), one of the principal architects of General Systems Theory. His vision extended beyond the boundaries of biology, proposing a holistic framework that could integrate knowledge across diverse fields, including cybernetics, psychology, sociology, and education. Within the context of Information Systems Engineering, this perspective reinforces the idea that systems cannot be understood in isolation but only through the dynamic interrelations among their components and their environment.
By aligning Malinowski's ethnographic sensitivity with Bertalanffy's systemic abstraction, ISE achieves both depth and breadth, combining human-centred inquiry with structural coherence. This synthesis supports the development of systems that are not only technically robust but also attuned to the complex social ecologies in which they are embedded.

As ISE evolved into a distinct discipline, scholars recognised the need for interdisciplinary integration, combining perspectives from sociology, computer science, management, and engineering (Wangler and Backlund 2005). This convergence proved essential for addressing the ill-structured and dynamic problems characteristic of contemporary information systems. The accelerated development of digital technologies and the growing prevalence of data-driven decision-making have intensified the demand for a robust, adaptable theoretical framework that can accommodate rapid technological and societal change.

Today, the practical consequences of recent transformations demonstrate that Information Systems Engineering faces an urgent need to revise its theoretical, methodological, and operational foundations to align with contemporary epistemic and technological realities. From a broader Systems Engineering perspective, this revision implies expanding beyond traditional hard systems approaches toward a more comprehensive integration of social, organisational, and human dimensions.

Contemporary systems are no longer closed technical constructs, but sociotechnical ecosystems characterised by complexity, uncertainty, and emergent behaviour. Consequently, the epistemological basis of Systems Engineering must recognise that valid knowledge includes not only structural models, but also contextual understanding shaped by multiple stakeholders.
Methodologically, this requires combining quantitative and qualitative methods, integrating hard and soft systems approaches, and adopting iterative, adaptive processes suited to dynamic environments. Operationally, systems must be designed for interoperability, resilience, and continuous evolution within interconnected networks. For Information Systems Engineering, this broader framework redefines its mission: it must encompass not only data and computation but also governance, ethics, and human agency.

Ultimately, the renewal of ISE's foundations represents a necessary paradigm shift, ensuring its capacity to respond effectively to the complex and interdependent systems of the twenty-first century (Autili et al. 2025).

2 Philosophy of Science and Computing

The philosophy of science, historically dedicated to the critical examination of scientific practice, has always sought to clarify how knowledge is constructed, validated, and communicated. In times of epistemic uncertainty and social transformation, its relevance becomes even greater. It reminds us that science is not only a collection of verified facts, but also a human enterprise structured by assumptions, values, and interpretive frameworks. Science, as discussed in the writings of Popper (Popper 1959), Lakatos (Lakatos 1976), Kuhn (Kuhn 1962), and Feyerabend (Feyerabend 1975), is not a construct immune to flaws or neutral in its assumptions. It is a human, collective, and situated activity, subject to revisions, controversies, and even paradigmatic ruptures.

The philosophy of science (Vallor 2022), historically devoted to examining how knowledge is constructed and validated, gains renewed significance in the age of computing. As thinkers have shown, science is not a neutral or infallible enterprise, but a human and collective activity shaped by assumptions, values, and interpretive frameworks.
In parallel, computing, though a relatively young discipline, has emerged as a transformative force that redefines not only scientific practice but also our modes of reasoning and communication. The rise of artificial intelligence, predictive modelling, and data-driven automation extends the epistemological and ethical questions once posed to science into new domains, compelling us to reconsider the very nature of explanation, causality, and responsibility in an increasingly algorithmic world.

Computing, although a relatively young discipline, has become one of the central forces shaping contemporary society. It not only produces tools and algorithms but redefines our very modes of reasoning, communication, and interaction with the world. Technologies such as artificial intelligence, predictive models, and data-driven automation now influence decisions that impact every aspect of human life, including education, healthcare, governance, and the economy. This growing influence brings to the forefront philosophical questions about the nature of explanation, causality, representation, and ethical responsibility.

The rapid expansion of computational systems has also revealed deep tensions. The so-called black-box nature of many algorithms makes them opaque to scrutiny. Decisions are often delegated to systems whose internal logic remains inaccessible even to their designers (Bathaee 2018). This opacity gives rise to a crisis of trust: when results are accepted simply because they are produced by “intelligent” systems, science risks becoming an act of faith rather than of reason. The philosophy of science thus calls for renewed demands for epistemic transparency, as well as for criteria of validity, interpretability, and accountability that extend beyond efficiency or performance metrics.
From this perspective, philosophy provides essential conceptual tools to reorient computing toward reflexivity and responsibility. First, it requires science to remain self-critical, aware of its presuppositions, and open to revision. The scientific process should be understood not as the pursuit of infallible truth, but as a systematic effort to approach understanding through doubt, experimentation, and argumentation.

Second, the scientist or, in this case, the computer scientist, must be seen as an agent responsible not only for technical correctness but also for the epistemological, social, and ethical implications of his or her work. The design of algorithms, data models, and computational infrastructures involves choices that encode values: what is measured, what is ignored, who benefits, and who is excluded. These are not merely technical decisions but philosophical ones, grounded in conceptions of truth, fairness, and knowledge (Mittelstadt et al. 2016).

Third, philosophy reminds us that scientific knowledge does not exist in isolation. It must be in dialogue with other forms of knowledge, including ethical, social, historical, and aesthetic perspectives, because science operates within a complex and dynamic society. This dialogue is especially important in computing, where technological artefacts directly influence human relationships, cognitive processes, and cultural formations.

Ultimately, the philosophy of science (Vallor 2022) invites computing to rediscover its moral and epistemic commitments. In a world increasingly governed by algorithms, the challenge is to ensure that computational systems serve truth, justice, and human flourishing rather than merely optimise or control outcomes.
Bridging philosophy and computing is, therefore, not an abstract intellectual exercise, but an urgent necessity for the integrity of science and for the future of human society.

Information Systems Engineering has always been situated at the intersection of technology and conceptual rigour. Its foundations, ontological, philosophical, semiotic, and mathematical, constitute a multidimensional framework for understanding how knowledge is represented and operationalised. The recent proliferation of LLMs introduces new questions to this framework: What kind of ontology do these models instantiate? How do they redefine the boundaries between symbol and meaning, data and knowledge, syntax and semantics?

3 Ontological Foundations: From Conceptual Models to Generative Ontologies

Failures in information systems development are often linked to inadequate or flawed methodologies, particularly those involving conceptual modelling, which serves multiple purposes in the development process. Critiques of such methods commonly highlight their weak theoretical grounding, prompting several efforts to establish stronger foundations based on different reference disciplines.

Although the relevance of ontology to data modelling was acknowledged as early as the 1950s, explicitly ontological approaches emerged only in the mid-1980s, when Wand and Weber, drawing on Mario Bunge's scientific ontology, developed what became known as the Bunge–Wand–Weber (BWW) ontology (Lukyanenko n.d.). This approach has been criticised for its limitation to the material world, comprising physical objects with properties independent of human perception. Bunge's framework excludes human intentions, interpretations, and meanings, neglecting the “institutional reality” that encompasses socially constructed entities such as corporations, contracts, and transactions.
Since these conceptual objects, central to organisational and informational contexts, have no representation within a purely material ontology, Bunge's framework is considered an inadequate foundation for conceptual modelling in organisational information systems.

Ontology, a central theme in this discourse, categorises the entities and relationships within information systems, thereby enhancing comprehension of the conceptual landscape. Philosophical considerations, such as epistemology and ethics, inform how knowledge is generated and utilised within these systems, enabling a deeper understanding of their functionality in social and organisational contexts. Semiotics further enriches this framework by examining the role of signs in communication and knowledge representation, ensuring clarity and precision in conveying information across different stakeholders.

In contrast to traditional approaches grounded in explicit ontological frameworks, LLMs operate within probabilistic language spaces, producing context-sensitive representations that lack predefined ontological commitments. This raises the question of whether their observed success truly stems from an ability to reason over unstructured or semi-structured data, or rather from the effective internalisation of linguistic patterns and contextual associations that function merely as implicit ontologies (Mai, Chu, and Paulheim 2025). This operational mode necessitates a reconsideration of the very notion of conceptual modelling, as meaning becomes dynamic and emergent rather than fixed. The concept of a generative ontology, therefore, frames knowledge as continuously constructed through linguistic interaction, rather than statically represented.
Such a perspective carries significant consequences for interoperability and knowledge integration within complex systems, challenging established assumptions about how conceptual structures are defined, communicated, and operationalised.

This tension characterises what Thomas Kuhn (Kuhn 1962) referred to as a paradigm crisis. According to the author, every scientific discipline evolves through long periods of normal science, during which a relatively stable conceptual framework guides practice, until the accumulation of anomalies and contradictions impedes the framework's functioning and imposes a paradigmatic rupture. In this sense, it can be stated with reasonable conviction that information systems have reached a critical point: the complexity of the social world and the integration of technology into everyday life have generated a set of problems that the traditional development model can no longer resolve.

In this context of epistemological and methodological disruption, Large Language Models (LLMs) emerge as a transformative response. Their generative capabilities offer a novel approach to ontology construction, particularly suited to the intricacies of sociotechnical systems. These systems are characterised by dense interrelations among human actors, institutional structures, and technological artefacts, all of which are documented across vast and heterogeneous textual corpora. LLMs possess the capacity to ingest, interpret, and synthesise this documentation, enabling the automated generation of ontologies that reflect both formal logic and the fluid semantics of human practice.

By bridging the semantic gap between human and machine understanding, LLMs facilitate the creation of ontological frameworks that are both contextually grounded and computationally tractable.
This dual capacity addresses a core limitation of traditional information systems development: the inability to reconcile dynamic social realities with rigid technical schemas. Furthermore, the adaptability of LLMs enables continuous refinement in response to evolving system requirements, aligning with the agile, iterative nature of contemporary sociotechnical environments (Wu and Or 2025). In this aspect, LLMs do not merely represent a technological innovation; they signify a paradigmatic shift in the epistemic foundations of information systems.

One of the primary advantages of LLMs lies in their semantic richness and contextual awareness. Trained on expansive and varied corpora, these models can discern subtle linguistic nuances and complex relationships that are often essential for accurately modelling the multifaceted interactions within sociotechnical systems. The generative capabilities of LLMs, combined with their proficiency in large-scale documentation analysis and semantic modelling, make them particularly well-suited for ontology generation in complex sociotechnical systems. Their integration into ontology engineering workflows promises to significantly enhance the responsiveness, accuracy, and relevance of knowledge representations in dynamic, interdisciplinary domains.

Ultimately, it is in the interplay between LLMs and human expertise that a genuine specification of quality in Information Systems emerges: the model contributes vast associative and inferential capacities across heterogeneous data spaces, while the human counterpart ensures semantic grounding, contextual interpretation, and ethical coherence. Together, they form a hybrid epistemic process in which computational scalability and human intentionality converge, enabling the construction of information systems that are not only technically consistent but also conceptually robust and socially meaningful.
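The ontology-generation workflow outlined above can be sketched in a deliberately simplified form. Everything in the following example is an illustrative assumption rather than the paper's method: a pattern matcher stands in for the LLM's semantic analysis, and the hypothetical `Ontology` class with its single `is_a` relation is a minimal stand-in for a real knowledge structure. An actual pipeline would prompt a model to emit such triples from heterogeneous documentation.

```python
import re
from collections import defaultdict

class Ontology:
    """Hypothetical minimal ontology: concepts linked by named relations."""
    def __init__(self):
        self.relations = defaultdict(set)  # (subject, relation) -> objects

    def add(self, subject, relation, obj):
        self.relations[(subject, relation)].add(obj)

    def query(self, subject, relation):
        return sorted(self.relations[(subject, relation)])

# Stand-in for the LLM's extraction step: a real system would prompt the
# model to emit triples; here a regex captures "X is a kind of Y".
IS_A = re.compile(r"([\w ]+?)\s+is a kind of\s+([\w ]+)")

def extract_triples(text):
    return [(s.strip().lower(), "is_a", o.strip().lower())
            for s, o in IS_A.findall(text)]

def refine(ontology, corpus):
    """Continuous refinement: fold newly documented concepts into the ontology."""
    for document in corpus:
        for s, r, o in extract_triples(document):
            ontology.add(s, r, o)
    return ontology

corpus = [
    "Invoice is a kind of business document.",
    "Purchase order is a kind of business document.",
]
onto = refine(Ontology(), corpus)
print(onto.query("invoice", "is_a"))  # ['business document']
```

The `refine` loop mirrors the continuous, corpus-driven refinement described above: each new document can extend the ontology without redesigning a fixed schema, while a human reviewer would still validate the extracted concepts.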
Within this hybrid framework, Retrieval-Augmented Generation (RAG) architectures play a decisive role by integrating LLMs' generative capabilities with the precision and reliability of external knowledge retrieval. RAG introduces a dynamic coupling between symbolic repositories and probabilistic language models, allowing the system to access, filter, and reinterpret domain-specific information during the generation process. This coupling transforms LLMs from purely statistical predictors into adaptive cognitive instruments that can ground their outputs in verifiable and contextually relevant data sources.

Consequently, when embedded in human-AI collaboration, RAG enhances interpretability and accountability, two dimensions essential to the epistemic legitimacy of Information Systems Engineering. The resulting systems not only synthesise knowledge at scale but also support traceable reasoning, bridging the gap between automated inference and human understanding. In this sense, RAG operationalises the very principle that defines quality in information systems: the continuous mediation between data-driven intelligence and human discernment.

4 Philosophical and Epistemological Considerations

Systems thinking in computer science does not emerge in a conceptual void; it is rooted in a longstanding philosophical and scientific tradition that extends from the holistic reasoning of classical philosophy to the constructivist and cybernetic paradigms of modern computation (Rousso 2025). The epistemology of systems thinking in information systems emphasises the dynamic and relational nature of knowledge creation and its evolution. Knowledge is not seen as a mere collection of facts but as a construct shaped by interactions within systems.
While its practical expressions appear in areas such as software engineering, information systems, and artificial intelligence, its theoretical structure reflects enduring questions about how complex systems can be represented, analysed, and influenced. At its core, systems thinking in computing engages with ontological issues concerning the nature of informational entities and processes, epistemological questions about how knowledge is formalised and validated in computational models, and ethical considerations regarding the consequences of automated decision-making. This philosophical grounding provides the conceptual depth needed to understand the evolving relationship among computation, representation, and reality.

Philosophically, LLMs challenge the correspondence model of truth that underlies most information systems. Their epistemology is not one of reference but of coherence, where truth is understood as internal consistency within vast corpora of text. Drawing on Heidegger's critique of technology (Heidegger 1977) and Floridi's philosophy of information (Floridi 2013), this paper examines how this new epistemic condition redefines the concept of “knowledge” in computational contexts.

Building on this philosophical foundation, the epistemic shift introduced by Large Language Models invites a re-examination of how knowledge is constituted, validated, and operationalised within computational systems. The coherence-based epistemology of LLMs diverges from the traditional correspondence model, which postulates that truth stems from a direct mapping between symbolic representations and external reality. Instead, LLMs generate meaning through probabilistic associations and semantic consistency across vast textual corpora, privileging internal relationality over external referentiality.
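The contrast between coherence and correspondence can be made concrete with a small sketch. Everything below is an illustrative assumption, not a description of any real system: `samples` stands in for repeated generations from an LLM, coherence is approximated by agreement among those samples (in the spirit of self-consistency decoding), and correspondence by a lookup in a hypothetical table of verified facts.

```python
from collections import Counter

def coherence(samples):
    """Internal consistency: the share of samples agreeing with the
    majority answer. Requires no external reference at all."""
    answer, freq = Counter(samples).most_common(1)[0]
    return answer, freq / len(samples)

def corresponds(answer, question, fact_table):
    """External reference: does the answer match verified knowledge?"""
    return fact_table.get(question) == answer

# A model can be highly coherent yet fail correspondence: its outputs
# agree with one another while diverging from the verified record.
samples = ["1912", "1912", "1912", "1911"]   # hypothetical generations
facts = {"year_of_event": "1905"}            # hypothetical verified entry

majority, score = coherence(samples)
print(majority, score)                                # 1912 0.75
print(corresponds(majority, "year_of_event", facts))  # False
```

The point of the toy is exactly the asymmetry discussed above: the coherence score can be high while the correspondence check fails, because the two criteria measure different things.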
This reconfiguration has profound implications for the design and interpretation of information systems. Heidegger's view on technology suggests that we should be cautious not to treat people and the world around us as mere tools or resources to be exploited. He believed that modern technology encourages us to view everything, including ourselves, as something to be managed or optimised, rather than valued for its deeper meaning or purpose. In this light, LLMs challenge the instrumental rationality that dominates conventional system architectures, offering instead an emergent, contextual, and interpretive mode of knowledge production. Rather than encoding fixed truths, LLMs simulate understanding by dynamically aligning linguistic patterns, thereby emphasising the ontological conditions under which information becomes meaningful.

Luciano Floridi's philosophy of information further illuminates this transformation. His conceptualisation of knowledge as a function of semantic content, well-formedness, and truthfulness aligns with the generative logic of LLMs, although with critical tension. While Floridi emphasises the ethical and epistemic responsibilities of informational agents, LLMs operate within a framework where coherence may substitute for veracity, raising questions about the reliability and accountability of machine-generated knowledge. This tension highlights the need for new evaluative criteria that extend beyond binary notions of truth and falsehood, embracing instead a spectrum of epistemic virtues, such as relevance, transparency, and contextual fidelity.

In computational contexts, this epistemic condition redefines knowledge not as a static repository of facts but as a fluid, dialogical process shaped by interaction, interpretation, and iteration.
LLMs exemplify this shift by enabling systems that learn, adapt, and respond within evolving discursive human–algorithm environments. They do not merely store or retrieve information; they participate in the construction of meaning, blurring the boundaries between data, knowledge, and understanding. Consequently, the integration of LLMs into information systems marks a paradigmatic departure from representationalist models, inviting a more reflexive and philosophically grounded approach to digital epistemology.

This epistemological shift, however, carries profound implications when LLMs interact with users whose beliefs diverge from established scientific consensus. In such interactions, the model's adaptive linguistic behaviour may unintentionally validate and reinforce non-empirical narratives, as coherence within dialogue is privileged over correspondence with objective reality. Without mechanisms of epistemic correction, the conversational process risks becoming a feedback loop of opinion, in which discursive plausibility substitutes for empirical verification.

Semiotics offers a crucial lens for understanding LLMs' representational mechanisms. While Peirce's triadic model (sign–object–interpretant) (Liszka 1996) presupposes an interpretive subject, LLMs simulate interpretation algorithmically.

From a computational perspective, this phenomenon reveals a structural limitation in current generative architectures: their optimisation for linguistic consistency rather than epistemic integrity. Much like the phenomenon of collective hallucination in human cognition, an LLM aligns semantically with the user's expressed worldview, generating responses that maintain dialogical harmony but may drift from factual grounding.
This dynamic positions the model within what Parmenides¹ described as the Path of Opinion, a domain of seemingly rational discourse detached from the pursuit of truth.

In the philosophy of information, this situation underscores the necessity of designing informational agents capable of epistemic self-regulation, that is, systems that can not only produce coherent linguistic outputs but also evaluate their correspondence with verified knowledge structures. Embedding such mechanisms requires a synthesis of systems thinking, epistemology, and computational ethics, ensuring that digital knowledge production remains anchored in empirical reality. The challenge, therefore, is not merely technical but conceptual: to engineer systems that preserve the fluidity of dialogical interaction while maintaining fidelity to the empirical world that sustains meaningful knowledge.

5 Ethical and Societal Implications

As Large Language Models (LLMs) and other AI systems become embedded in critical infrastructures, such as education, health, finance, and governance, their ontological opacity introduces new layers of ethical complexity. Unlike traditional automation, which is constrained by explicit procedural logic, generative models operate through probabilistic inference within vast semantic spaces. This opacity challenges the principles of accountability and transparency traditionally expected in corporate and institutional governance. Ethical evaluation, therefore, must extend beyond compliance and data privacy to encompass epistemic justice and the preservation of human interpretive autonomy. Moreover, it addresses the ethical dimension: if meaning is generated algorithmically, how should responsibility and authorship be rethought in the design of intelligent systems (Stahl 2023)?
From the standpoint of corporate social responsibility (CSR), organisations that adopt LLM-based solutions must acknowledge their dual role: as technological innovators and as moral agents within the sociotechnical ecosystem. Bias, misinformation, and the reinforcement of structural inequalities are not merely technical faults, but reflections of epistemic asymmetries encoded within the model's training data. Addressing these asymmetries requires more than corrective algorithms; it demands a framework guided by philosophical clarity and moral prudence, ensuring that design choices align with human values and democratic accountability.

Transparency of Meaning – Systems must be designed to make their inferential processes, limitations, and epistemic boundaries intelligible to users and stakeholders. This transparency is not only procedural but semantic, recognising that meaning-making is a human prerogative that cannot be fully delegated to machines (Russo, Schliesser and Wagemans 2024).

Plurality of Ontologies – Information systems must respect the diversity of cultural, linguistic, and interpretive frameworks through which human communities construct knowledge. The imposition of a single computational ontology risks homogenising meaning and erasing epistemic diversity.

Preservation of Interpretive Autonomy – Automation should augment, not replace, human judgment. Decision-support systems must preserve users' capacity to interpret, deliberate, and dissent, thereby maintaining the ethical dimension of responsibility within human–machine interaction.

These principles reposition the debate on AI ethics from a compliance-oriented stance to an ontological ethics of responsibility.

¹ Greek pre-Socratic philosopher (530 BC to 460 BC)
Rather than focusing exclusively on technical transparency or data governance, this approach seeks to articulate how AI can coexist with human interpretive freedom. Ultimately, the ethical maturity of AI-driven systems will depend less on the sophistication of their algorithms and more on organisations' commitment to designing, implementing, and regulating them in ways that respect the plurality of meanings and the moral agency of human beings (Gordon and Nyholm 2021).

The ethical dimension of automation in organisational contexts extends beyond technical implementation and encompasses profound implications for both humans and culture. Employees whose tasks are subject to automation often experience anxiety and resistance due to the perceived threat of job loss. Such reactions are not merely individual but are rooted in broader organisational cultures that may perceive automation as a disruption to established practices and professional identities. Consequently, any process of technological integration demands careful ethical reflection on how to mitigate these fears and ensure fair adaptation.

From an ethical standpoint, it is essential to design automation strategies that respect workers' dignity, provide opportunities for retraining, and promote transparent communication about the goals and consequences of technological change. The transition to automated processes must therefore be guided by principles of justice, empathy, and inclusion, recognising that resistance frequently arises from legitimate concerns about security and belonging. Addressing these issues requires not only managerial foresight but also moral responsibility for employees' psychological well-being and for preserving a healthy organisational culture.
6 Technical aspects

In recent years, the rapid development of large language models has transformed the way computational systems generate, interpret, and apply knowledge. These models have demonstrated remarkable capabilities in producing fluent and coherent text, solving complex problems, and adapting to a wide variety of domains. However, despite their versatility, such models remain limited by their intrinsic dependence on the data used during training. Once deployed, they operate without direct access to external sources of information, relying exclusively on internalised representations that may not always reflect current, accurate, or contextually appropriate knowledge. This limitation often results in outputs that are plausible in form but factually inconsistent or disconnected from the specific demands of a given task. From a technical perspective, this issue arises because most large language models function in a closed inference regime, where parameters encode statistical correlations derived from pretraining corpora rather than dynamically validated facts. The embedding space, though rich in semantic associations, remains static after training, preventing the system from updating its representations in response to novel or evolving data. Moreover, the context window used during inference limits the model's capacity to retain or cross-reference extended informational contexts, resulting in partial or fragmented reasoning chains. These structural constraints, combined with the absence of retrieval or verification mechanisms, explain phenomena such as hallucination, domain drift, and semantic overgeneralisation, where generated content prioritises linguistic plausibility over empirical correctness.
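The "closed inference regime" can be illustrated with a toy example. In the following sketch, the vocabulary and vectors are invented for illustration; they stand in for a real model's embedding table, which is likewise frozen once training ends, so that its similarity judgements cannot incorporate facts learned after deployment:

```python
import math

# Toy "frozen" embedding table: the associations below were fixed at
# (pretend) training time and cannot be updated at inference time.
# All words and vectors are invented for illustration only.
EMBEDDINGS = {
    "paris":  [0.9, 0.1, 0.0],
    "france": [0.8, 0.2, 0.1],
    "tokyo":  [0.1, 0.9, 0.0],
    "japan":  [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(word):
    """Most similar vocabulary item — determined entirely by the static table."""
    others = [w for w in EMBEDDINGS if w != word]
    return max(others, key=lambda w: cosine(EMBEDDINGS[word], EMBEDDINGS[w]))

print(nearest("paris"))  # association fixed at training time: "france"
```

However current or outdated the world becomes, `nearest` will return the same answers, which is precisely the limitation that retrieval mechanisms address.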
Consequently, without integration of external retrieval, feedback loops, or grounding in structured knowledge bases, such models risk producing outputs that mimic understanding while lacking the epistemic robustness required for reliable decision-making in specialised or dynamically changing domains.

To overcome this challenge, recent research has proposed integrating retrieval-augmented generation (RAG) as a means of grounding model outputs in relevant and verifiable information (Gao et al. 2024). In this framework, the generative process is coupled with an information retrieval mechanism that identifies and selects texts, examples, or solutions that bear contextual similarity to the problem at hand. By combining retrieved evidence with the model's generative reasoning, it becomes possible to produce responses that are both creative and factually aligned, thus bridging the gap between probabilistic inference and grounded knowledge.

This approach is grounded in two foundational observations. First, a wide range of programming and reasoning tasks exhibit profound structural commonalities, not only in their formulation but also in the cognitive and algorithmic strategies required to resolve them. Second, empirical evidence suggests that LLM performance can be substantially improved when they are provided with contextually relevant exemplars or analogous problem cases before generating a solution.

The concept of RAG represents one of the most significant advancements in the recent evolution of intelligent systems, particularly in their ability to connect symbolic knowledge with statistical language modelling.
By integrating external knowledge retrieval with generative modelling, RAG establishes a hybrid cognitive architecture: it combines the precision of information retrieval (IR) methods with the contextual adaptability of large language models (LLMs). In practical terms, when a query or prompt is introduced, the RAG framework first retrieves relevant information from a curated knowledge base or document repository, grounding the subsequent generation phase in verifiable and contextually aligned content. This mechanism mitigates one of the central limitations of purely generative systems, namely, their tendency toward hallucination or ungrounded inference, by constraining the model's reasoning within a corpus of externally validated data.

The integration of retrieval and generation produces a dynamic interaction between structured and unstructured representations. While the retrieval component ensures factual coherence and domain relevance, the generative component synthesises this information into coherent, human-like discourse. As a result, RAG does not merely recall or reproduce existing data; it reconstructs knowledge through contextual reasoning, enabling adaptive problem-solving across complex computational domains, such as programming, data analytics, scientific modelling, and decision support.

In this sense, RAG provides a promising framework for bridging the gap between abstract reasoning and concrete applications. It operationalises the transition from symbolic inference (anchored in explicit data) to probabilistic interpretation (characteristic of LLMs), thus aligning machine reasoning more closely with human cognitive processes. This convergence enables systems not only to generate linguistically coherent responses but also to reason effectively over retrieved evidence, leading to more robust, explainable, and semantically grounded outputs.
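The retrieve-then-generate loop can be sketched in a few lines. In this minimal illustration, the three-document corpus and the query are invented, bag-of-words vectors stand in for learned dense embeddings, and the final LLM call is omitted — the sketch covers only the retrieval and prompt-assembly stages that ground the subsequent generation:

```python
import math
from collections import Counter

# Invented mini-corpus standing in for a curated knowledge base.
CORPUS = [
    "RAG grounds generation in retrieved documents",
    "Ontologies structure domain knowledge for information systems",
    "Retrieval augmented generation reduces hallucination in language models",
]

def bow(text):
    """Bag-of-words term counts — a stand-in for a dense embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k corpus documents most similar to the query."""
    q = bow(query)
    return sorted(CORPUS, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

def build_prompt(query):
    """Ground the generation phase by prepending retrieved evidence."""
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("retrieval augmented generation hallucination"))
```

A production system would replace `bow`/`cosine` with a vector index over dense embeddings and pass the assembled prompt to an LLM, but the division of labour — retrieval constrains, generation synthesises — is the same.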
When a system retrieves analogous problems and their corresponding solutions, it effectively provides the model with a cognitive platform that supports reasoning through comparison and analogy. This process not only enhances performance but also contributes to epistemic transparency, as the sources of information become part of the reasoning context.

From a mathematical perspective, the efficacy of retrieval-augmented generation in problem-solving domains rests on principles from discrete mathematics, linear algebra, and probability theory. Structured problem instances can often be represented as graphs, matrices, or formal expressions, enabling the identification of isomorphisms or structural similarities between prior and current tasks. Algorithmic patterns, such as recurrence relations, combinatorial structures, or optimisation strategies, provide a formal substrate for retrieval mechanisms to operate on, guiding the selection of relevant analogues. Moreover, probabilistic models underlie the scoring and ranking of retrieved instances, allowing the system to weigh the likelihood that a particular prior solution will generalise effectively to a new context. This mathematical grounding ensures that the retrieval process is not merely associative but systematically aligned with the domain's formal properties, thereby reinforcing both the reliability and interpretability of the reasoning produced.

By aligning linguistic generation with structured retrieval, this approach represents a significant step toward a new generation of intelligent systems, ones that are not merely probabilistic but epistemically grounded and context-aware. To address the limitations discussed earlier, retrieval-augmented generation has emerged as an effective method for connecting large language models with relevant external knowledge.
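One common way to realise the probabilistic scoring and ranking of retrieved instances is to normalise raw similarity scores into a probability distribution with a softmax, so that each retrieved analogue carries a weight interpretable as the likelihood that it generalises to the new task. The sketch below assumes this softmax formulation; the instance names and similarity scores are invented for illustration:

```python
import math

def softmax(scores):
    """Turn raw similarity scores into a probability distribution."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def rank_analogues(candidates):
    """candidates: list of (instance_id, raw_similarity) pairs.

    Returns (instance_id, weight) pairs sorted by descending weight,
    where the weights sum to 1 and can be read as generalisation
    likelihoods over the retrieved set.
    """
    weights = softmax([s for _, s in candidates])
    ids = [c[0] for c in candidates]
    return sorted(zip(ids, weights), key=lambda p: p[1], reverse=True)

# Invented retrieved analogues for a new optimisation task.
ranked = rank_analogues([("dp-knapsack", 2.1),
                         ("greedy-schedule", 0.4),
                         ("dp-edit-distance", 1.7)])
print(ranked)  # highest-weighted analogue first
```

Because the weights form a distribution, they can also be used to blend several analogues into the prompt rather than committing to a single nearest neighbour.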
In this approach, the model does not rely solely on its internal parameters but also utilises an information retrieval component to search a structured repository for texts similar to the current problem. The retrieved information is then combined with the model's own generative capacity, allowing it to produce answers that are more precise, coherent, and contextually grounded.

7 Considerations and Future Directions

The preceding discussions converge toward a shared understanding that the future of Information Systems Engineering (ISE) demands a reconfiguration of its conceptual, methodological, and ethical foundations. The convergence of intelligent technologies, societal transformation, and environmental urgency compels the discipline to articulate a renewed vision anchored in systemic coherence, epistemic responsibility, and sustainable innovation. The following considerations delineate the essential directions for future action.

ISE must advance beyond its traditional technological orientation to incorporate sustainability as a fundamental principle, rather than an external constraint. The forthcoming generation of engineered systems should be self-adaptive, energy-efficient, and resilient, capable of operating within complex ecological and socio-technical contexts. This evolution requires embedding sustainability metrics into the earliest stages of design, ensuring that resource optimisation and environmental stewardship become intrinsic components of engineering decisions. The discipline must therefore align its methods with global imperatives of sustainability and ethical responsibility, redefining success not only in terms of efficiency but also in terms of long-term societal benefits.

The expansion of societal demands, ranging from digital inclusion to equitable access to information, necessitates a systemic reconceptualisation of design goals.
Future systems must evolve as dynamic agents capable of co-adaptation with changing social structures and values. This entails synchronising individual system life cycles with collective challenges such as public health, education, and social equity. The methodological response should foster interdisciplinary collaboration, bridging engineering, social sciences, and ethics to address ill-structured problems that defy traditional technical boundaries.

Artificial intelligence and machine learning are not merely instruments of automation but epistemic agents² that reshape the way systems are conceived and governed. Their deployment introduces profound ethical challenges concerning autonomy, accountability, and the transformation of human labour. The future practice of ISE must institutionalise ethical risk assessment as a fundamental stage in the design process, ensuring that innovations contribute positively to human welfare. Governance mechanisms should promote transparency, explicability, and fairness in automated decision-making, counterbalancing the asymmetries that emerging technologies may exacerbate.

Global technological interdependence renders traditional governance frameworks inadequate. The future of systems engineering lies in constructing polycentric governance models, which integrate governments, academia, civil society, and the private sector in a shared responsibility for technological evolution. Such models must foster open dialogue and cooperative regulation to manage uncertainty, mitigate systemic risk, and uphold the primacy of the public good. Governance, in this sense, becomes an adaptive process that co-evolves with technological and societal transformations.
Finally, ISE must cultivate a reflexive stance, acknowledging that technological systems are not neutral artefacts but active participants in shaping human knowledge and social order. This recognition demands a continuous re-examination of the epistemological assumptions underlying engineering practice. The discipline's renewal thus depends on its capacity to integrate philosophical reflection, ethical foresight, and empirical rigour into a unified framework of responsible innovation.

² The expression epistemic agents designates entities that can engage in epistemic activities, that is, activities related to the acquisition, production, justification, representation, and communication of knowledge. The concept is rooted in epistemology, where it is used to characterise those who can hold beliefs, evaluate evidence, and revise their cognitive states in light of new information.

8 Conclusion

The reflections developed in this paper converge on a central claim: Information Systems Engineering (ISE) must confront the epistemic and ontological consequences of the computational transformations it now embodies. The emergence of Large Language Models (LLMs) represents far more than a technological innovation; it signals a profound shift in how knowledge, meaning, and representation are conceived within information systems. Traditional engineering paradigms, grounded in stable requirements, deterministic logic, and fixed ontological schemas, are increasingly unable to capture the fluid, dialogical, and context-sensitive nature of contemporary digital ecosystems. The convergence of philosophy, ontology, and systems thinking reveals that the current evolution of computational technologies cannot be adequately understood within traditional paradigms.
These models challenge the correspondence view of truth, reconfigure the processes of meaning-making, and redefine the very conditions under which knowledge is produced, validated, and applied in digital environments.

The epistemic and ethical implications of this transformation demand a reorientation of the discipline toward reflexivity, transparency, and accountability. Information systems can no longer be conceived merely as technical artefacts; they are epistemic agents that participate in the construction of social reality. Their design, therefore, entails philosophical responsibility. The interplay between ontological clarity and computational generativity must be consciously managed to ensure that the systems we engineer continue to serve human values, intellectual integrity, and democratic participation.

Philosophy of science provides the critical framework for this reorientation. It reminds us that science, and by extension, computing, is a human endeavour governed not solely by logic and efficiency, but by interpretive frameworks, ethical commitments, and social contexts. Likewise, ontology supplies the formal rigour required to structure knowledge, while semiotics mediates between meaning and representation. These perspectives, when integrated within the engineering of information systems, foster an understanding of computation that is not reductionist but systemic, not instrumental but interpretive.

Ethically, this synthesis reaffirms the primacy of human interpretive autonomy. In a landscape increasingly dominated by algorithmic mediation, the preservation of human judgment, plurality of ontologies, and transparency of meaning become moral imperatives. The responsibility of researchers, engineers, and organisations extends beyond innovation; it encompasses cultivating epistemic justice and preventing informational asymmetries that undermine public trust and intellectual diversity.
Acknowledgments. This study was partially funded by the Brazilian Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) by grants 402.086/2023-6 and 306.695/2022-7.

Disclosure of Interests. The author has no competing interests to declare that are relevant to the content of this article. During the drafting of this paper, LLMs were utilised to assist in editing and condensing partial drafts written by the author. The author then further edited and refined the text with Grammarly. Such use and this acknowledgement adhere to the ethical guidelines for the use of generative AI in academic research. The author has developed the work entirely, which has been thoroughly vetted for accuracy, and assumes responsibility for the integrity of their contributions.

References

Autili, Marco, Martina De Sanctis, Paola Inverardi, and Patrizio Pelliccione. “Engineering Digital Systems for Humanity: A Research Roadmap.” ACM Transactions on Software Engineering and Methodology, 26 May 2025, 5 ed.: 1-33.

Bathaee, Yavar. “The Artificial Intelligence Black Box and the Failure of Intent and Causation.” Harvard Journal of Law & Technology, Spring 2018, 2 ed.: 890-938.

Feyerabend, Paul. Against Method: Outline of an Anarchistic Theory of Knowledge. London: New Left Books, 1975.

Floridi, Luciano. The Ethics of Information. Oxford: Oxford University Press, 2013.

Gao, Yunfan, et al. “Retrieval-augmented generation for large language models: A survey.” arXiv preprint. Mar. 27, 2024.

Gordon, John-Stewart, and Sven Nyholm. “Ethics of Artificial Intelligence.” Internet Encyclopedia of Philosophy. Feb. 2021. https://iep.utm.edu/ethic-ai/ (accessed Oct. 19, 2025).

Guizzardi, Giancarlo. Ontological Foundations for Structural Conceptual Models. Twente: Centre for Telematics and Information Technology, University of Twente, 2005.

Heidegger, Martin.
“The Question Concerning Technology.” In Basic Writings, edited by David Farrell Krell, 287-317. New York: Harper & Row, 1977.

Kuhn, T. S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.

Lakatos, Imre. Proofs and Refutations. Cambridge: Cambridge University Press, 1976.

Liszka, James Jakób. A General Introduction to the Semeiotic of Charles Sanders Peirce. Bloomington and Indianapolis: Indiana University Press, 1996.

Lukyanenko, R., Storey, V.C., and Pastor, O. “Foundations of information technology based on Bunge's systemist philosophy of reality.” Softw Syst Model, n.d.: 921-938.

Mai, H.T., Chu, C.X., and Paulheim, H. “Do LLMs Really Adapt to Domains? An Ontology Learning Perspective.” Edited by G. Demartini et al. Lecture Notes in Computer Science, The Semantic Web – ISWC 2024. Springer Nature Switzerland, 2025. 126-143.

Malinowski, Bronislaw. “Practical Anthropology.” Journal of the International African Institute, Jan. 1929: 22-38.

Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The ethics of algorithms: Mapping the debate.” Big Data & Society, 2016, July–December ed.: 1-21.

Popper, K. The Logic of Scientific Discovery. New York, NY: Basic Books, 1959.

Rousso, Dotan. “Chapter 3 – Philosophical Foundations of Systems Thinking.” In Systems Thinking. Calgary, AB: Southern Alberta Institute of Technology, 2025.

Russo, Federica, Eric Schliesser, and Jean Wagemans. “Connecting ethics and epistemology of AI.” AI & Society: Knowledge, Culture and Communication, 2024: 1585-1603.

Stahl, Bernd Carsten. “Embedding responsibility in intelligent systems: from AI ethics to responsible AI ecosystems.” Scientific Reports, 18 May 2023.

Vallor, Shannon, ed. The Oxford Handbook of Philosophy of Technology. New York, NY: Oxford University Press, 2022.

Von Bertalanffy, Ludwig.
“The history and status of general systems theory.” Academy of Management Journal, 1972: 407-426.

Wangler, Benkt, and Alexander Backlund. “Information systems engineering: What is it?” CAiSE 2005 Workshops. Porto, Portugal: FEUP Edições, 2005. 427-437.

Wu, Ju, and Calvin K. L. Or. “Position Paper: Towards Open Complex Human–AI Agents Collaboration Systems for Problem Solving and Knowledge Management.” arXiv. 9 Oct. 2025.

Yearworth, Mike. “The theoretical foundation(s) for Systems Engineering?” Systems Research and Behavioral Science, Jan. 2020: 1-4.