Introduction
In this week's lab, my task was to add two more features to my release 0.1 project. The catch is that each feature needed to be worked on in a separate branch, and then the branches needed to be merged into the main branch. In this blog post I will go over the changes that were made, how they were implemented, and how they were ultimately merged into the main branch.
Feature 1: Exit codes and error messages
My first feature was to add appropriate exit codes and error messages to the program. When the program exits, it will either return an exit code of 0 (meaning there were no errors) or an exit code of 1 (meaning there was an error), printing an error message in the latter case. Of the two features, this was much easier to implement, since I didn't need to write much new code or relearn anything. All I needed to do was add a few lines where the program exits:
process.exit(0);
In cases with errors, an error message is printed right before the exit:
console.error(`Error: File ${fileName} not found`);
process.exit(1);
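Put together, the flow looks something like the rough sketch below. Note that readInputFile is a made-up helper name for illustration, not necessarily what my code calls it:

const fs = require("fs");

// Try to read the input file; print an error and exit with code 1 on failure
function readInputFile(fileName) {
  try {
    return fs.readFileSync(fileName, "utf-8");
  } catch (err) {
    console.error(`Error: File ${fileName} not found`);
    process.exit(1);
  }
}

const contents = readInputFile("example.txt");
// ... generate the README from contents ...

// Nothing went wrong, so exit with code 0
process.exit(0);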
Overall this feature took me only around 10 minutes to complete, giving me plenty of time to focus on my next, much harder feature. Below is the issue and associated commit:
Feature 2: Changing and using other models
For the second feature, I needed to add a new command: the model change command, or --mc. With this command, the user can specify which model they want the AI to use by passing an integer as the argument after it:
0: “llama3-8b-8192”
1: “mixtral-8x7b-32768”
2: “llava-v1.5-7b-4096-preview”
By default, model 0 is used (llama3-8b-8192), but the file can be generated with the other two models if the user passes --mc 1 or --mc 2. The user can also enter --mc -1, which generates one README file using each model. The token information of each model is also displayed if the user specifies it with --t. Here are some example invocations, followed by a rough sketch of how the option is handled:
--mc 1
--mc 2
--mc -1
--mc -1 --t
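Under the hood, handling --mc mostly comes down to mapping that integer to a model name. Here is a minimal sketch of that part, assuming the argument is pulled straight out of process.argv (my actual parsing may differ slightly):

// Models the user can choose from with --mc
const models = [
  "llama3-8b-8192",
  "mixtral-8x7b-32768",
  "llava-v1.5-7b-4096-preview",
];

// Grab the value that follows --mc (defaults to 0 when the flag is absent)
const mcIndex = process.argv.indexOf("--mc");
const mcArg = mcIndex === -1 ? 0 : parseInt(process.argv[mcIndex + 1], 10);

// Reject anything outside -1..2
if (Number.isNaN(mcArg) || mcArg < -1 || mcArg >= models.length) {
  console.error(`Error: ${process.argv[mcIndex + 1]} is not a valid model choice`);
  process.exit(1);
}

// --mc -1 means "use every model"; otherwise use only the selected one
const selectedModels = mcArg === -1 ? models : [models[mcArg]];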
This feature was much harder to implement, since I had to make sure the model was properly changed; I had some issues with the program recognizing which model to use. On top of that, the option to generate one file per model took some slight restructuring of the program: the function that creates READMEs and the function that handles token information had to go inside a for loop (sketched below). Overall I had to spend a decent amount of time on this feature, but I was able to get it sorted out. Below is the issue and associated commit:
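To give a clearer picture of that restructuring, here is a rough outline of the loop. generateReadme and displayTokenInfo are stand-in names for my actual functions, and showTokenInfo is assumed to come from parsing --t, so treat this as a sketch of the idea rather than the real code:

// Assumes selectedModels and showTokenInfo were set while parsing the arguments,
// and that this runs inside an async main() function
for (const model of selectedModels) {
  // Generate one README with the current model
  const result = await generateReadme(model, fileName);

  // Only print token usage if the user passed --t
  if (showTokenInfo) {
    displayTokenInfo(model, result);
  }
}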
Merging
When merging, I did run into some merge conflicts, since there was so much extra code in the second feature that overlapped the code from the first. Sorting out the merge conflicts was a bit confusing, and I didn't quite understand how to resolve them at first. But after a bit of trial and error, I got everything pushed with both new features included. Here is the link to the merge commit:
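For anyone curious about the mechanics, the overall merge flow looked roughly like this (the branch names here are just examples, not my actual branch names):

git checkout main
git merge add-exit-codes      # merged cleanly
git merge add-model-change    # conflicts in the files both branches touched
# edit the conflicting files to resolve the conflict markers, then:
git add .
git commit
git push origin main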
Conclusion
This was a good lab for practicing merging multiple branches. While I am familiar with pull requests for single branches, I've rarely dealt with merging multiple branches at the same time and fixing the associated merge conflicts. It was good to add one simple feature and one complex feature, as it made the merge conflicts easier to manage. Overall I think I learned a lot from this lab, and in the future I will challenge myself with multiple, more complex feature branches.