
The Rise of AI Pair Programmers


Challenges and the Future of Coding with AI Pair Programmers

The research paper explored the influence of emerging generative tools, especially GitHub Copilot, and assessed their efficiency relative to coding performed by humans. I find this an enthralling area of study. The rapid advancements in the generative landscape have raised concerns about these tools potentially replacing humans in coding, development, and even supportive roles. This study delved into evaluating the truth of such apprehensions.

From the research, my overarching conclusion is that the results are multifaceted. The authors aptly highlight that:

Copilot can pose risks when relied upon by novice developers, as they might not discern its suboptimal or flawed suggestions due to their limited expertise.

The study presents scenarios where Copilot demonstrated remarkable prowess and others where it was less effective. One limitation is the model's attention capacity. For instance, the base GPT-4 model is restricted to roughly 8,000 tokens of context, which may not encompass the complexity of larger projects. Although there is ongoing research to enlarge the context windows of Large Language Models (LLMs), there are associated cost challenges. A noteworthy observation is that some companies have started training these models on their own datasets, indicating a promising direction for application.
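The context-window constraint can be made concrete with a back-of-the-envelope check. The sketch below uses the common heuristic of roughly four characters per English-language token (real tokenizer counts vary by model; the constants here are illustrative, not from the study) to estimate whether a set of source files would fit in an 8,000-token window:

```python
CHARS_PER_TOKEN = 4     # rough heuristic; actual tokenizers vary by model
CONTEXT_WINDOW = 8_000  # tokens, e.g. the base GPT-4 window

def estimate_tokens(text: str) -> int:
    """Crude token estimate: assume ~4 characters per token."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(sources: list[str]) -> bool:
    """Check whether the combined source texts fit in one context window."""
    total = sum(estimate_tokens(s) for s in sources)
    return total <= CONTEXT_WINDOW
```

Even this crude estimate shows why larger projects overflow the window: a single 40,000-character file already lands at roughly 10,000 estimated tokens.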

A compelling use case is the application of these tools in bridging knowledge gaps. An example that stands out is the WatsonX Code Assistant for Z. COBOL, despite being a legacy language, is foundational for many governmental systems. The decreasing number of COBOL-proficient developers is concerning. Innovations like the WatsonX Code Assistant aim to bridge this deficit, marking a significant leap forward.

While innovative generative tools have made significant strides in coding and development, do they offer holistic solutions? My perspective is that they don’t. The research consistently indicates that even as these systems facilitate solution implementation, the outcomes aren’t exhaustive. The indispensable role of a human coder remains critical to guaranteeing the security and efficiency of the solutions.

I recently ventured into deploying a Jekyll site for Bokeh plots. Though deployment was straightforward, I sought to amalgamate two distinct themes, which entailed sifting through extensive code to merge theme elements seamlessly. Enter Copilot in Visual Studio: it significantly streamlined the task. By highlighting sections, I could prompt Copilot for explanations, which enhanced my understanding of the code. During an initial trial, I tasked Copilot with merging the config files of two Jekyll sites while eliminating redundancies. The result was less than perfect: it omitted numerous styling elements and references. However, by framing my questions judiciously and applying an intuitive approach to the code, the process became more enlightening. Rather than turning to documentation or online searches, I also used Copilot to craft a Python script for a seminar project. While its insights were valuable, it lacked the contextual understanding to build the script comprehensively, but it served as a useful resource for understanding it step by step.
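The config-merge task I handed to Copilot can be sketched in plain Python. This is a hypothetical reconstruction, not the code Copilot produced: it treats the two parsed `_config.yml` files as nested dictionaries (YAML parsing itself is left out), merges them recursively with the second theme's values winning on conflicts, and drops the duplicate list entries that tripped up my first attempt:

```python
def merge_configs(base: dict, overlay: dict) -> dict:
    """Recursively merge two parsed Jekyll _config.yml dicts.

    Values in `overlay` win on conflicts; nested dicts are merged
    key by key, and exact duplicate list entries are dropped.
    """
    merged = dict(base)
    for key, value in overlay.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_configs(merged[key], value)
        elif key in merged and isinstance(merged[key], list) and isinstance(value, list):
            # Concatenate, keeping order and removing exact duplicates.
            merged[key] = merged[key] + [v for v in value if v not in merged[key]]
        else:
            merged[key] = value
    return merged

# Hypothetical fragments of the two themes' _config.yml files:
theme_a = {"title": "Plots", "plugins": ["jekyll-feed"], "sass": {"style": "compressed"}}
theme_b = {"plugins": ["jekyll-feed", "jekyll-seo-tag"], "sass": {"load_paths": ["_sass"]}}
site = merge_configs(theme_a, theme_b)
```

Even a small helper like this makes the omissions easy to spot: any key present in only one theme survives the merge, so missing styling references point back to the source files rather than to the merge step.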

Reflecting on our last classroom discussions: “No one writes new code; we only reuse it and improve it.” Copilot epitomizes this philosophy. Yet the divergence arises in supervising these tools. As the study's authors emphasize, the tool's utility is magnified when it is viewed as a supplement rather than a replacement. If we depend excessively on its capabilities, we risk stagnating our learning and engagement. While Copilot can generate code, urging it to review and evaluate my work deepens my comprehension. Its current state doesn't encapsulate the reasoning and logic inherent to human cognition. Consequently, our role transforms from mere coders to full-stack guides, steering our companion AI toward pertinent and precise results. The potential benefits are manifold. Merging its capabilities with traditional research methods expedites problem-solving. In my experience, feeding it intricate error messages often yields concise explanations and, occasionally, rectification steps.

Utilizing generative tools in the right manner offers unparalleled advantages. It accelerates the learning trajectory, empowering individuals to venture into unexplored territories and pose challenging questions. Nevertheless, overdependence can erode our innate abilities to innovate and reason. The essence of human value resides in our capacity to ideate and manifest. Delegating this innate creativity dilutes our significance. However, this shouldn’t deter us from embracing powerful tools that amplify our potential and foster simultaneous learning.

Conclusion #

A sobering realization is our replaceability. Advanced models will invariably surge in precision as increased expertise is channeled into training and iterative enhancement. Consider Stack Overflow's endeavor to distill its expansive knowledge base into a singular chatbot. Similarly, the longstanding finance forum that I frequent, Wall Street Oasis, has launched an AI bot proficient in contextual replies by harnessing the cumulative insights of years of forum posts and replies. This metamorphosis underscores the tremendous utility of tools like Copilot for adept professionals. These tools also serve as catalysts for accelerated learning and sustained relevance. The inexorable progression of time sees the quantum of tasks swelling, mirroring the phenomenon where work tends to expand to fill the available time. This dynamic landscape favors the inquisitive, urging them to fathom the underlying “whys.” Those indifferent to this evolution risk obsolescence. In my perspective, this scenario epitomizes the age-old principle of survival, albeit reframed in a modern context: survival of the most adaptable and inquisitive.

These reflections pivot to a critical inference: while AI might not supplant humans, the onus is on us to perpetually refine and expand our horizons. The crux should center around relentless knowledge acquisition, skill enhancement, and evolution. If stagnation ensues, the overarching objective becomes nebulous. Furthermore, an ethical dimension emerges. A predominant apprehension among developers is the perception of AI-generated code as derivative or plagiarized. Yet the distinction between AI output and a developer gleaning code fragments from online repositories becomes blurry. Both essentially mimic a similar pattern. Many developers, myself included, began their coding journey by dissecting existing code, occasionally via unauthorized avenues. This hands-on exploration has enriched my understanding. A custom feature I've integrated prompts ChatGPT to generate five follow-up questions after every output (included below for this essay). These questions not only encourage deeper contemplation but also stimulate the exploration of new perspectives and tangents. Tools, regardless of their sophistication, derive their effectiveness from the adeptness of the user. Deploying them without understanding their functionality or the related domain dilutes their utility, and over-reliance dilutes human ingenuity.

The follow-up questions generated as I edited the paragraphs of the reflection above for grammar, spelling, and punctuation #

  • How might AI tools be improved to encourage collaboration without risking over-reliance?
  • Do you foresee a point where AI-generated code might be universally accepted without concerns of plagiarism?
  • What strategies can human developers employ to ensure they remain indispensable in the AI-augmented workspace?
  • How can the educational sector adapt to train budding developers in the era of advanced generative tools?
  • Can AI ever truly replicate human intuition and creativity, or will it always require a human touch?
  • How do you see the evolution of tools like Copilot impacting the coding education sector?
  • How might developers maintain a balance between leveraging AI tools and ensuring they continue honing their skills?
  • Are there potential ethical considerations when relying heavily on AI for coding solutions?
  • What other areas, outside of coding, might benefit from similar generative tools?
  • How might AI tools be designed to encourage human-AI collaboration rather than dependency?
  • How do you envision the future of coding with the proliferation of tools like GitHub Copilot?
  • Are there potential drawbacks for seasoned developers when using such generative tools?
  • In what other sectors, apart from tech, could these generative tools find applications?
  • How might training on proprietary datasets affect the outcomes and reliability of generative tools?
  • Are there ways to merge the strengths of both humans and these generative tools to achieve optimal results?
  • How might the evolution of generative tools impact the job market, especially in the tech sector?
  • What are the ethical implications of relying heavily on tools like GitHub Copilot for coding?
  • How might organizations ensure that novice developers use such tools effectively and safely?
  • In the context of older programming languages like COBOL, how can generative tools help bridge the knowledge gap and ensure the continuity of legacy systems?
  • Are there any ethical considerations to be aware of when using generative tools for software development?
  • How does the current technological environment influence people’s curiosity and willingness to learn?
  • In what ways can individuals stay relevant in an ever-evolving digital age?
  • How can organizations ensure their employees maintain a continuous learning mindset?
  • Are there any downsides to a “survival of the fittest” mentality in a professional or educational setting?
  • How can we cultivate a culture that emphasizes both efficiency and in-depth understanding?
  • What are the potential long-term implications of relying heavily on AI tools like Copilot for coding tasks?
  • How do you see the future evolution of AI tools in enhancing or potentially replacing certain job roles?
  • In what ways can AI tools be integrated into educational systems to enhance learning?
  • What measures can be taken to ensure that the use of AI tools doesn’t compromise the foundational understanding of a subject?
  • Are there ethical considerations to be addressed when relying on AI-generated outputs, especially in professional settings?
  • How can we ensure that AI-generated code is ethical and not plagiarized?
  • What measures can be implemented to ensure human developers continue to stay relevant in the face of advancing AI?
  • How do you differentiate between learning from others’ code and direct plagiarism?
  • In what ways can AI tools be used to complement human developers rather than replace them?
  • Are there any current standards or guidelines in place for ethical AI development in the coding industry?
  • What are the potential long-term impacts of generative tools like GitHub Copilot on the software development industry?
  • How can developers ensure that they are using tools like Copilot effectively without becoming overly reliant on them?
  • How might the evolution of these generative tools change the landscape of software development education and training in the future?
  • How might the evolution of generative tools like Copilot impact the curriculum of coding boot camps or computer science degrees?
  • What measures can be implemented to ensure that novice developers using Copilot don’t become overly reliant on it?
  • Could the introduction of AI in coding lead to new specialized roles in the tech industry, such as AI code reviewers or AI-enhanced software architects?
  • As generative AI tools evolve, how might the tech industry’s hiring criteria shift in terms of assessing a coder’s proficiency?
  • Beyond coding, in what other professional fields do you foresee AI tools making significant inroads in the coming years?